00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2457
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3722
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.070 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.097 Fetching changes from the remote Git repository
00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.133 Using shallow fetch with depth 1
00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.133 > git --version # timeout=10
00:00:00.181 > git --version # 'git version 2.39.2'
00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.541 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.550 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.561 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.561 > git config core.sparsecheckout # timeout=10
00:00:04.570 > git read-tree -mu HEAD # timeout=10
00:00:04.585 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.613 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.613 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.719 [Pipeline] Start of Pipeline
00:00:04.732 [Pipeline] library
00:00:04.734 Loading library shm_lib@master
00:00:04.734 Library shm_lib@master is cached. Copying from home.
00:00:04.751 [Pipeline] node
00:00:04.769 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:04.770 [Pipeline] {
00:00:04.781 [Pipeline] catchError
00:00:04.783 [Pipeline] {
00:00:04.796 [Pipeline] wrap
00:00:04.804 [Pipeline] {
00:00:04.815 [Pipeline] stage
00:00:04.817 [Pipeline] { (Prologue)
00:00:04.836 [Pipeline] echo
00:00:04.837 Node: VM-host-SM38
00:00:04.844 [Pipeline] cleanWs
00:00:04.854 [WS-CLEANUP] Deleting project workspace...
00:00:04.854 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.860 [WS-CLEANUP] done
00:00:05.047 [Pipeline] setCustomBuildProperty
00:00:05.135 [Pipeline] httpRequest
00:00:05.745 [Pipeline] echo
00:00:05.747 Sorcerer 10.211.164.20 is alive
00:00:05.756 [Pipeline] retry
00:00:05.758 [Pipeline] {
00:00:05.768 [Pipeline] httpRequest
00:00:05.776 HttpMethod: GET
00:00:05.777 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.779 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.782 Response Code: HTTP/1.1 200 OK
00:00:05.782 Success: Status code 200 is in the accepted range: 200,404
00:00:05.783 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.216 [Pipeline] }
00:00:06.231 [Pipeline] // retry
00:00:06.237 [Pipeline] sh
00:00:06.520 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.534 [Pipeline] httpRequest
00:00:07.361 [Pipeline] echo
00:00:07.363 Sorcerer 10.211.164.20 is alive
00:00:07.370 [Pipeline] retry
00:00:07.372 [Pipeline] {
00:00:07.384 [Pipeline] httpRequest
00:00:07.388 HttpMethod: GET
00:00:07.389 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.389 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.391 Response Code: HTTP/1.1 200 OK
00:00:07.392 Success: Status code 200 is in the accepted range: 200,404
00:00:07.392 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:30.707 [Pipeline] }
00:00:30.724 [Pipeline] // retry
00:00:30.732 [Pipeline] sh
00:00:31.025 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:33.574 [Pipeline] sh
00:00:33.857 + git -C spdk log --oneline -n5
00:00:33.857 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:33.857 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:33.857 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:33.857 66289a6db build: use VERSION file for storing version
00:00:33.857 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:33.890 [Pipeline] withCredentials
00:00:33.900 > git --version # timeout=10
00:00:33.910 > git --version # 'git version 2.39.2'
00:00:33.926 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:33.928 [Pipeline] {
00:00:33.938 [Pipeline] retry
00:00:33.940 [Pipeline] {
00:00:33.952 [Pipeline] sh
00:00:34.233 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:34.509 [Pipeline] }
00:00:34.526 [Pipeline] // retry
00:00:34.530 [Pipeline] }
00:00:34.546 [Pipeline] // withCredentials
00:00:34.554 [Pipeline] httpRequest
00:00:34.974 [Pipeline] echo
00:00:34.976 Sorcerer 10.211.164.20 is alive
00:00:34.985 [Pipeline] retry
00:00:34.987 [Pipeline] {
00:00:35.001 [Pipeline] httpRequest
00:00:35.007 HttpMethod: GET
00:00:35.007 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:35.008 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:35.017 Response Code: HTTP/1.1 200 OK
00:00:35.017 Success: Status code 200 is in the accepted range: 200,404
00:00:35.018 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:45.036 [Pipeline] }
00:01:45.053 [Pipeline] // retry
00:01:45.060 [Pipeline] sh
00:01:45.345 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:47.273 [Pipeline] sh
00:01:47.558 + git -C dpdk log --oneline -n5
00:01:47.558 caf0f5d395 version: 22.11.4
00:01:47.558 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:47.558 dc9c799c7d vhost: fix missing spinlock unlock
00:01:47.558 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:47.558 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:47.577 [Pipeline] writeFile
00:01:47.591 [Pipeline] sh
00:01:47.874 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:47.890 [Pipeline] sh
00:01:48.174 + cat autorun-spdk.conf
00:01:48.174 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.174 SPDK_TEST_NVME=1
00:01:48.174 SPDK_TEST_FTL=1
00:01:48.174 SPDK_TEST_ISAL=1
00:01:48.174 SPDK_RUN_ASAN=1
00:01:48.174 SPDK_RUN_UBSAN=1
00:01:48.174 SPDK_TEST_XNVME=1
00:01:48.174 SPDK_TEST_NVME_FDP=1
00:01:48.174 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:48.174 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:48.174 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.182 RUN_NIGHTLY=1
00:01:48.184 [Pipeline] }
00:01:48.198 [Pipeline] // stage
00:01:48.215 [Pipeline] stage
00:01:48.217 [Pipeline] { (Run VM)
00:01:48.230 [Pipeline] sh
00:01:48.515 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:48.515 + echo 'Start stage prepare_nvme.sh'
00:01:48.515 Start stage prepare_nvme.sh
00:01:48.515 + [[ -n 6 ]]
00:01:48.515 + disk_prefix=ex6
00:01:48.515 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:48.515 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:48.515 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:48.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.515 ++ SPDK_TEST_NVME=1
00:01:48.515 ++ SPDK_TEST_FTL=1
00:01:48.515 ++ SPDK_TEST_ISAL=1
00:01:48.515 ++ SPDK_RUN_ASAN=1
00:01:48.515 ++ SPDK_RUN_UBSAN=1
00:01:48.515 ++ SPDK_TEST_XNVME=1
00:01:48.515 ++ SPDK_TEST_NVME_FDP=1
00:01:48.515 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:48.515 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:48.515 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.515 ++ RUN_NIGHTLY=1
00:01:48.515 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:48.515 + nvme_files=()
00:01:48.515 + declare -A nvme_files
00:01:48.515 + backend_dir=/var/lib/libvirt/images/backends
00:01:48.515 + nvme_files['nvme.img']=5G
00:01:48.515 + nvme_files['nvme-cmb.img']=5G
00:01:48.515 + nvme_files['nvme-multi0.img']=4G
00:01:48.515 + nvme_files['nvme-multi1.img']=4G
00:01:48.515 + nvme_files['nvme-multi2.img']=4G
00:01:48.515 + nvme_files['nvme-openstack.img']=8G
00:01:48.515 + nvme_files['nvme-zns.img']=5G
00:01:48.515 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:48.515 + (( SPDK_TEST_FTL == 1 ))
00:01:48.515 + nvme_files["nvme-ftl.img"]=6G
00:01:48.515 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:48.515 + nvme_files["nvme-fdp.img"]=1G
00:01:48.515 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:48.515 + for nvme in "${!nvme_files[@]}"
00:01:48.515 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:48.515 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.515 + for nvme in "${!nvme_files[@]}"
00:01:48.515 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:48.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.775 + for nvme in "${!nvme_files[@]}"
00:01:48.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:01:49.035 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:49.035 + for nvme in "${!nvme_files[@]}"
00:01:49.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:49.035 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:49.035 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:49.035 + echo 'End stage prepare_nvme.sh'
00:01:49.035 End stage prepare_nvme.sh
00:01:49.048 [Pipeline] sh
00:01:49.334 + DISTRO=fedora39
00:01:49.334 + CPUS=10
00:01:49.334 + RAM=12288
00:01:49.334 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:49.334 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:49.334
00:01:49.334 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:49.334 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:49.334 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:49.334 HELP=0
00:01:49.334 DRY_RUN=0
00:01:49.334 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:01:49.334 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:49.334 NVME_AUTO_CREATE=0
00:01:49.334 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:01:49.334 NVME_CMB=,,,,
00:01:49.334 NVME_PMR=,,,,
00:01:49.334 NVME_ZNS=,,,,
00:01:49.334 NVME_MS=true,,,,
00:01:49.334 NVME_FDP=,,,on,
00:01:49.334 SPDK_VAGRANT_DISTRO=fedora39
00:01:49.334 SPDK_VAGRANT_VMCPU=10
00:01:49.334 SPDK_VAGRANT_VMRAM=12288
00:01:49.334 SPDK_VAGRANT_PROVIDER=libvirt
00:01:49.334 SPDK_VAGRANT_HTTP_PROXY=
00:01:49.334 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:49.334 SPDK_OPENSTACK_NETWORK=0
00:01:49.334 VAGRANT_PACKAGE_BOX=0
00:01:49.334 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:49.334 FORCE_DISTRO=true
00:01:49.334 VAGRANT_BOX_VERSION=
00:01:49.334 EXTRA_VAGRANTFILES=
00:01:49.334 NIC_MODEL=e1000
00:01:49.334
00:01:49.334 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:49.334 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:51.747 Bringing machine 'default' up with 'libvirt' provider...
00:01:52.319 ==> default: Creating image (snapshot of base box volume).
00:01:52.579 ==> default: Creating domain with the following settings...
00:01:52.579 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734138085_723b366456f1696a436c
00:01:52.579 ==> default: -- Domain type: kvm
00:01:52.579 ==> default: -- Cpus: 10
00:01:52.579 ==> default: -- Feature: acpi
00:01:52.579 ==> default: -- Feature: apic
00:01:52.579 ==> default: -- Feature: pae
00:01:52.579 ==> default: -- Memory: 12288M
00:01:52.579 ==> default: -- Memory Backing: hugepages:
00:01:52.579 ==> default: -- Management MAC:
00:01:52.579 ==> default: -- Loader:
00:01:52.579 ==> default: -- Nvram:
00:01:52.579 ==> default: -- Base box: spdk/fedora39
00:01:52.579 ==> default: -- Storage pool: default
00:01:52.579 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734138085_723b366456f1696a436c.img (20G)
00:01:52.579 ==> default: -- Volume Cache: default
00:01:52.579 ==> default: -- Kernel:
00:01:52.579 ==> default: -- Initrd:
00:01:52.579 ==> default: -- Graphics Type: vnc
00:01:52.579 ==> default: -- Graphics Port: -1
00:01:52.579 ==> default: -- Graphics IP: 127.0.0.1
00:01:52.579 ==> default: -- Graphics Password: Not defined
00:01:52.579 ==> default: -- Video Type: cirrus
00:01:52.579 ==> default: -- Video VRAM: 9216
00:01:52.579 ==> default: -- Sound Type:
00:01:52.579 ==> default: -- Keymap: en-us
00:01:52.579 ==> default: -- TPM Path:
00:01:52.579 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:52.579 ==> default: -- Command line args:
00:01:52.579 ==> default: -> value=-device,
00:01:52.579 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:52.579 ==> default: -> value=-drive,
00:01:52.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:52.579 ==> default: -> value=-device,
00:01:52.579 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:52.579 ==> default: -> value=-device,
00:01:52.579 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:52.579 ==> default: -> value=-drive,
00:01:52.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:01:52.579 ==> default: -> value=-device,
00:01:52.579 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.579 ==> default: -> value=-device,
00:01:52.579 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:52.579 ==> default: -> value=-drive,
00:01:52.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:52.579 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.580 ==> default: -> value=-drive,
00:01:52.580 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:52.580 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.580 ==> default: -> value=-drive,
00:01:52.580 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:52.580 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.580 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:52.580 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:52.580 ==> default: -> value=-drive,
00:01:52.580 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:52.580 ==> default: -> value=-device,
00:01:52.580 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:52.580 ==> default: Creating shared folders metadata...
00:01:52.839 ==> default: Starting domain.
00:01:54.220 ==> default: Waiting for domain to get an IP address...
00:02:12.368 ==> default: Waiting for SSH to become available...
00:02:12.368 ==> default: Configuring and enabling network interfaces...
00:02:14.271 default: SSH address: 192.168.121.72:22
00:02:14.271 default: SSH username: vagrant
00:02:14.271 default: SSH auth method: private key
00:02:16.224 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:21.481 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:26.742 ==> default: Mounting SSHFS shared folder...
00:02:28.114 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:28.114 ==> default: Checking Mount..
00:02:29.045 ==> default: Folder Successfully Mounted!
00:02:29.045
00:02:29.045 SUCCESS!
00:02:29.045
00:02:29.045 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:29.045 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:29.045 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:29.045
00:02:29.052 [Pipeline] }
00:02:29.066 [Pipeline] // stage
00:02:29.074 [Pipeline] dir
00:02:29.075 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:29.076 [Pipeline] {
00:02:29.087 [Pipeline] catchError
00:02:29.089 [Pipeline] {
00:02:29.100 [Pipeline] sh
00:02:29.374 + vagrant ssh-config --host vagrant
00:02:29.374 + sed -ne '/^Host/,$p'
00:02:29.374 + tee ssh_conf
00:02:31.906 Host vagrant
00:02:31.906 HostName 192.168.121.72
00:02:31.906 User vagrant
00:02:31.906 Port 22
00:02:31.906 UserKnownHostsFile /dev/null
00:02:31.906 StrictHostKeyChecking no
00:02:31.906 PasswordAuthentication no
00:02:31.906 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:31.906 IdentitiesOnly yes
00:02:31.906 LogLevel FATAL
00:02:31.906 ForwardAgent yes
00:02:31.906 ForwardX11 yes
00:02:31.906
00:02:31.963 [Pipeline] withEnv
00:02:31.964 [Pipeline] {
00:02:31.975 [Pipeline] sh
00:02:32.249 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:32.249 source /etc/os-release
00:02:32.249 [[ -e /image.version ]] && img=$(< /image.version)
00:02:32.249 # Minimal, systemd-like check.
00:02:32.249 if [[ -e /.dockerenv ]]; then
00:02:32.249 # Clear garbage from the node'\''s name:
00:02:32.249 # agt-er_autotest_547-896 -> autotest_547-896
00:02:32.249 # $HOSTNAME is the actual container id
00:02:32.249 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:32.249 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:32.249 # We can assume this is a mount from a host where container is running,
00:02:32.249 # so fetch its hostname to easily identify the target swarm worker.
00:02:32.249 container="$(< /etc/hostname) ($agent)"
00:02:32.249 else
00:02:32.249 # Fallback
00:02:32.249 container=$agent
00:02:32.249 fi
00:02:32.249 fi
00:02:32.249 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:32.249 '
00:02:32.258 [Pipeline] }
00:02:32.271 [Pipeline] // withEnv
00:02:32.277 [Pipeline] setCustomBuildProperty
00:02:32.288 [Pipeline] stage
00:02:32.290 [Pipeline] { (Tests)
00:02:32.302 [Pipeline] sh
00:02:32.576 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:32.586 [Pipeline] sh
00:02:32.863 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:33.134 [Pipeline] timeout
00:02:33.134 Timeout set to expire in 50 min
00:02:33.136 [Pipeline] {
00:02:33.151 [Pipeline] sh
00:02:33.428 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:33.686 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:33.696 [Pipeline] sh
00:02:33.971 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:33.981 [Pipeline] sh
00:02:34.252 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:34.265 [Pipeline] sh
00:02:34.541 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:34.541 ++ readlink -f spdk_repo
00:02:34.541 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:34.541 + [[ -n /home/vagrant/spdk_repo ]]
00:02:34.541 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:34.541 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:34.541 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:34.541 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:34.541 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:34.541 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:34.541 + cd /home/vagrant/spdk_repo
00:02:34.541 + source /etc/os-release
00:02:34.541 ++ NAME='Fedora Linux'
00:02:34.541 ++ VERSION='39 (Cloud Edition)'
00:02:34.541 ++ ID=fedora
00:02:34.541 ++ VERSION_ID=39
00:02:34.541 ++ VERSION_CODENAME=
00:02:34.541 ++ PLATFORM_ID=platform:f39
00:02:34.541 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:34.541 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:34.541 ++ LOGO=fedora-logo-icon
00:02:34.541 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:34.541 ++ HOME_URL=https://fedoraproject.org/
00:02:34.541 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:34.541 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:34.541 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:34.541 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:34.541 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:34.541 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:34.541 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:34.541 ++ SUPPORT_END=2024-11-12
00:02:34.541 ++ VARIANT='Cloud Edition'
00:02:34.541 ++ VARIANT_ID=cloud
00:02:34.541 + uname -a
00:02:34.541 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:34.541 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:35.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:35.107 Hugepages
00:02:35.107 node hugesize free / total
00:02:35.107 node0 1048576kB 0 / 0
00:02:35.107 node0 2048kB 0 / 0
00:02:35.107
00:02:35.107 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:35.107 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:35.107 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:35.365 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:35.365 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:35.365 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:35.365 + rm -f /tmp/spdk-ld-path
00:02:35.365 + source autorun-spdk.conf
00:02:35.365 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.365 ++ SPDK_TEST_NVME=1
00:02:35.365 ++ SPDK_TEST_FTL=1
00:02:35.365 ++ SPDK_TEST_ISAL=1
00:02:35.365 ++ SPDK_RUN_ASAN=1
00:02:35.365 ++ SPDK_RUN_UBSAN=1
00:02:35.365 ++ SPDK_TEST_XNVME=1
00:02:35.365 ++ SPDK_TEST_NVME_FDP=1
00:02:35.365 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:35.365 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:35.365 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:35.365 ++ RUN_NIGHTLY=1
00:02:35.365 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:35.365 + [[ -n '' ]]
00:02:35.365 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:35.365 + for M in /var/spdk/build-*-manifest.txt
00:02:35.365 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:35.365 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.365 + for M in /var/spdk/build-*-manifest.txt
00:02:35.365 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:35.365 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.365 + for M in /var/spdk/build-*-manifest.txt
00:02:35.365 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:35.365 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.365 ++ uname
00:02:35.365 + [[ Linux == \L\i\n\u\x ]]
00:02:35.365 + sudo dmesg -T
00:02:35.365 + sudo dmesg --clear
00:02:35.365 + dmesg_pid=5743
00:02:35.365 + [[ Fedora Linux == FreeBSD ]]
00:02:35.365 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:35.365 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:35.365 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:35.365 + sudo dmesg -Tw
00:02:35.365 + [[ -x /usr/src/fio-static/fio ]]
00:02:35.365 + export FIO_BIN=/usr/src/fio-static/fio
00:02:35.365 + FIO_BIN=/usr/src/fio-static/fio
00:02:35.365 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:35.365 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:35.365 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:35.365 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:35.365 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:35.365 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:35.365 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:35.365 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:35.365 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.365 01:02:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:35.365 01:02:08 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:35.365 01:02:08 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:35.365 01:02:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:35.365 01:02:08 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.365 01:02:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:35.365 01:02:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:35.365 01:02:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:35.365 01:02:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:35.365 01:02:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:35.365 01:02:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:35.365 01:02:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.366 01:02:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.366 01:02:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.366 01:02:08 -- paths/export.sh@5 -- $ export PATH
00:02:35.366 01:02:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.366 01:02:08 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:35.366 01:02:08 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:35.366 01:02:08 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734138128.XXXXXX
00:02:35.366 01:02:08 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734138128.EFzKiT
00:02:35.366 01:02:08 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:35.366 01:02:08 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:35.366 01:02:08 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:35.623 01:02:08 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:35.623 01:02:08 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:35.623 01:02:08 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:35.623 01:02:08 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:35.623 01:02:08 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:35.623 01:02:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:35.623 01:02:08 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:35.623 01:02:08 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:35.623 01:02:08 -- pm/common@17 -- $ local monitor
00:02:35.623 01:02:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.623 01:02:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.623 01:02:08 -- pm/common@25 -- $ sleep 1
00:02:35.623 01:02:08 -- pm/common@21 -- $ date +%s
00:02:35.623 01:02:08 -- pm/common@21 -- $ date +%s
00:02:35.623 01:02:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734138128
00:02:35.623 01:02:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734138128
00:02:35.623 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734138128_collect-cpu-load.pm.log
00:02:35.623 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734138128_collect-vmstat.pm.log
00:02:36.555 01:02:09 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:36.555 01:02:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:36.555 01:02:09 --
spdk/autobuild.sh@12 -- $ umask 022 00:02:36.555 01:02:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.555 01:02:09 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.555 Sat Dec 14 01:02:09 AM UTC 2024 00:02:36.555 01:02:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.555 v25.01-rc1-2-ge01cb43b8 00:02:36.555 01:02:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:36.555 01:02:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:36.555 01:02:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.555 01:02:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.555 01:02:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.555 ************************************ 00:02:36.555 START TEST asan 00:02:36.555 ************************************ 00:02:36.555 using asan 00:02:36.555 01:02:10 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:36.555 00:02:36.555 real 0m0.000s 00:02:36.555 user 0m0.000s 00:02:36.555 sys 0m0.000s 00:02:36.555 01:02:10 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.555 ************************************ 00:02:36.555 END TEST asan 00:02:36.555 ************************************ 00:02:36.555 01:02:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.555 01:02:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.555 01:02:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.555 01:02:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.555 01:02:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.555 01:02:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.555 ************************************ 00:02:36.555 START TEST ubsan 00:02:36.555 ************************************ 00:02:36.555 using ubsan 00:02:36.555 01:02:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:36.555 00:02:36.555 real 0m0.000s 00:02:36.555 user 0m0.000s 
00:02:36.555 sys 0m0.000s 00:02:36.555 01:02:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.555 ************************************ 00:02:36.555 END TEST ubsan 00:02:36.555 01:02:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.555 ************************************ 00:02:36.555 01:02:10 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:36.555 01:02:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.555 01:02:10 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.555 01:02:10 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:36.555 01:02:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.555 01:02:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.555 ************************************ 00:02:36.555 START TEST build_native_dpdk 00:02:36.555 ************************************ 00:02:36.555 01:02:10 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.555 01:02:10 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.555 caf0f5d395 version: 22.11.4 00:02:36.555 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:36.555 dc9c799c7d vhost: fix missing spinlock unlock 00:02:36.555 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:36.555 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' 
-Werror' 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:36.555 01:02:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.556 01:02:10 
build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:36.556 patching file config/rte_config.h 00:02:36.556 Hunk #1 succeeded at 60 (offset 1 line). 
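The `cmp_versions` trace above splits each version on `.-:` into an array and compares components numerically, left to right. A minimal sketch of that logic (note: `ver_lt` is a hypothetical name for illustration, not SPDK's actual `scripts/common.sh` helper, and it assumes purely numeric components):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, mirroring the traced algorithm:
# split on .-: and compare component-wise; missing components count as 0.
ver_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

# The check in the log: 22.11.4 < 21.11.0 is false, so autobuild proceeds.
ver_lt 22.11.4 21.11.0 && echo lt || echo not-lt   # prints "not-lt"
```

This matches the trace: the first component pair (22 vs 21) already decides the comparison, so `cmp_versions` returns 1 without examining the remaining components.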
00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:36.556 patching file lib/pcapng/rte_pcapng.c 00:02:36.556 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.556 01:02:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:36.556 01:02:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:40.734 The Meson build system 00:02:40.734 Version: 1.5.0 00:02:40.734 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:40.734 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:40.734 Build type: native build 00:02:40.734 Program cat found: YES (/usr/bin/cat) 00:02:40.734 Project name: DPDK 00:02:40.734 Project version: 22.11.4 00:02:40.734 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.734 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:40.734 Host machine cpu family: x86_64 00:02:40.734 Host machine cpu: x86_64 00:02:40.734 Message: ## Building in Developer Mode ## 00:02:40.734 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.734 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:40.734 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.734 Program objdump found: YES (/usr/bin/objdump) 00:02:40.735 Program python3 found: YES (/usr/bin/python3) 00:02:40.735 Program cat found: YES (/usr/bin/cat) 00:02:40.735 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
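The `-Denable_drivers` value in the meson invocation above is produced by `printf`-joining the `DPDK_DRIVERS` array with commas (`autobuild_common.sh@195`); because `printf` repeats its format string once per argument, the result carries a trailing comma, which meson accepts. A small sketch using an illustrative subset of the driver list:

```shell
#!/usr/bin/env bash
# Joining an array with commas the way autobuild_common.sh builds
# the -Denable_drivers value (subset of the real list, for illustration).
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e")

# printf reuses the "%s," format for every array element,
# so the output ends with a trailing comma.
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"   # prints "bus,bus/pci,bus/vdev,mempool/ring,net/i40e,"
```

The same pattern appears in the log's full invocation, where the joined string ends in `power/kvm_vm,`.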
00:02:40.735 Checking for size of "void *" : 8 00:02:40.735 Checking for size of "void *" : 8 (cached) 00:02:40.735 Library m found: YES 00:02:40.735 Library numa found: YES 00:02:40.735 Has header "numaif.h" : YES 00:02:40.735 Library fdt found: NO 00:02:40.735 Library execinfo found: NO 00:02:40.735 Has header "execinfo.h" : YES 00:02:40.735 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.735 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.735 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.735 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.735 Run-time dependency openssl found: YES 3.1.1 00:02:40.735 Run-time dependency libpcap found: YES 1.10.4 00:02:40.735 Has header "pcap.h" with dependency libpcap: YES 00:02:40.735 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.735 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.735 Compiler for C supports arguments -Wformat: YES 00:02:40.735 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.735 Compiler for C supports arguments -Wformat-security: NO 00:02:40.735 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.735 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.735 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.735 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.735 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.735 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.735 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.735 Compiler for C supports arguments -Wundef: YES 00:02:40.735 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.735 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.735 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:40.735 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.735 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.735 Compiler for C supports arguments -mavx512f: YES 00:02:40.735 Checking if "AVX512 checking" compiles: YES 00:02:40.735 Fetching value of define "__SSE4_2__" : 1 00:02:40.735 Fetching value of define "__AES__" : 1 00:02:40.735 Fetching value of define "__AVX__" : 1 00:02:40.735 Fetching value of define "__AVX2__" : 1 00:02:40.735 Fetching value of define "__AVX512BW__" : 1 00:02:40.735 Fetching value of define "__AVX512CD__" : 1 00:02:40.735 Fetching value of define "__AVX512DQ__" : 1 00:02:40.735 Fetching value of define "__AVX512F__" : 1 00:02:40.735 Fetching value of define "__AVX512VL__" : 1 00:02:40.735 Fetching value of define "__PCLMUL__" : 1 00:02:40.735 Fetching value of define "__RDRND__" : 1 00:02:40.735 Fetching value of define "__RDSEED__" : 1 00:02:40.735 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:40.735 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.735 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.735 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.735 Checking for function "getentropy" : YES 00:02:40.735 Message: lib/eal: Defining dependency "eal" 00:02:40.735 Message: lib/ring: Defining dependency "ring" 00:02:40.735 Message: lib/rcu: Defining dependency "rcu" 00:02:40.735 Message: lib/mempool: Defining dependency "mempool" 00:02:40.735 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.735 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.735 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:40.735 Compiler for C supports arguments -mpclmul: YES 00:02:40.735 Compiler for C supports arguments -maes: YES 00:02:40.735 Compiler for 
C supports arguments -mavx512f: YES (cached) 00:02:40.735 Compiler for C supports arguments -mavx512bw: YES 00:02:40.735 Compiler for C supports arguments -mavx512dq: YES 00:02:40.735 Compiler for C supports arguments -mavx512vl: YES 00:02:40.735 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.735 Compiler for C supports arguments -mavx2: YES 00:02:40.735 Compiler for C supports arguments -mavx: YES 00:02:40.735 Message: lib/net: Defining dependency "net" 00:02:40.735 Message: lib/meter: Defining dependency "meter" 00:02:40.735 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.735 Message: lib/pci: Defining dependency "pci" 00:02:40.735 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.735 Message: lib/metrics: Defining dependency "metrics" 00:02:40.735 Message: lib/hash: Defining dependency "hash" 00:02:40.735 Message: lib/timer: Defining dependency "timer" 00:02:40.735 Fetching value of define "__AVX2__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.735 Message: lib/acl: Defining dependency "acl" 00:02:40.735 Message: lib/bbdev: Defining dependency "bbdev" 00:02:40.735 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:40.735 Run-time dependency libelf found: YES 0.191 00:02:40.735 Message: lib/bpf: Defining dependency "bpf" 00:02:40.735 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:40.735 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.735 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.735 Message: lib/distributor: Defining dependency "distributor" 00:02:40.735 Message: lib/efd: Defining dependency "efd" 00:02:40.735 Message: lib/eventdev: Defining dependency "eventdev" 00:02:40.735 Message: lib/gpudev: Defining dependency 
"gpudev" 00:02:40.735 Message: lib/gro: Defining dependency "gro" 00:02:40.735 Message: lib/gso: Defining dependency "gso" 00:02:40.735 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:40.735 Message: lib/jobstats: Defining dependency "jobstats" 00:02:40.735 Message: lib/latencystats: Defining dependency "latencystats" 00:02:40.735 Message: lib/lpm: Defining dependency "lpm" 00:02:40.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512IFMA__" : 1 00:02:40.735 Message: lib/member: Defining dependency "member" 00:02:40.735 Message: lib/pcapng: Defining dependency "pcapng" 00:02:40.735 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.735 Message: lib/power: Defining dependency "power" 00:02:40.735 Message: lib/rawdev: Defining dependency "rawdev" 00:02:40.735 Message: lib/regexdev: Defining dependency "regexdev" 00:02:40.735 Message: lib/dmadev: Defining dependency "dmadev" 00:02:40.735 Message: lib/rib: Defining dependency "rib" 00:02:40.735 Message: lib/reorder: Defining dependency "reorder" 00:02:40.735 Message: lib/sched: Defining dependency "sched" 00:02:40.735 Message: lib/security: Defining dependency "security" 00:02:40.735 Message: lib/stack: Defining dependency "stack" 00:02:40.735 Has header "linux/userfaultfd.h" : YES 00:02:40.735 Message: lib/vhost: Defining dependency "vhost" 00:02:40.735 Message: lib/ipsec: Defining dependency "ipsec" 00:02:40.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.735 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.735 Message: lib/fib: Defining dependency "fib" 00:02:40.735 Message: lib/port: Defining dependency "port" 00:02:40.735 Message: lib/pdump: Defining dependency "pdump" 00:02:40.735 Message: lib/table: Defining dependency "table" 00:02:40.735 Message: lib/pipeline: Defining dependency 
"pipeline" 00:02:40.735 Message: lib/graph: Defining dependency "graph" 00:02:40.735 Message: lib/node: Defining dependency "node" 00:02:40.735 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.735 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.735 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.735 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.735 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:40.735 Compiler for C supports arguments -Wno-unused-value: YES 00:02:40.735 Compiler for C supports arguments -Wno-format: YES 00:02:40.735 Compiler for C supports arguments -Wno-format-security: YES 00:02:40.735 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:40.735 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:40.735 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:40.735 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:41.668 Fetching value of define "__AVX2__" : 1 (cached) 00:02:41.668 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.669 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.669 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.669 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:41.669 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:41.669 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:41.669 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.669 Configuring doxy-api.conf using configuration 00:02:41.669 Program sphinx-build found: NO 00:02:41.669 Configuring rte_build_config.h using configuration 00:02:41.669 Message: 00:02:41.669 ================= 00:02:41.669 Applications Enabled 00:02:41.669 ================= 00:02:41.669 00:02:41.669 apps: 00:02:41.669 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 
00:02:41.669 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:41.669 test-security-perf,
00:02:41.669 
00:02:41.669 Message:
00:02:41.669 =================
00:02:41.669 Libraries Enabled
00:02:41.669 =================
00:02:41.669 
00:02:41.669 libs:
00:02:41.669 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:41.669 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:41.669 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:41.669 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:41.669 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:41.669 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:41.669 table, pipeline, graph, node,
00:02:41.669 
00:02:41.669 Message:
00:02:41.669 ===============
00:02:41.669 Drivers Enabled
00:02:41.669 ===============
00:02:41.669 
00:02:41.669 common:
00:02:41.669 
00:02:41.669 bus:
00:02:41.669 pci, vdev,
00:02:41.669 mempool:
00:02:41.669 ring,
00:02:41.669 dma:
00:02:41.669 
00:02:41.669 net:
00:02:41.669 i40e,
00:02:41.669 raw:
00:02:41.669 
00:02:41.669 crypto:
00:02:41.669 
00:02:41.669 compress:
00:02:41.669 
00:02:41.669 regex:
00:02:41.669 
00:02:41.669 vdpa:
00:02:41.669 
00:02:41.669 event:
00:02:41.669 
00:02:41.669 baseband:
00:02:41.669 
00:02:41.669 gpu:
00:02:41.669 
00:02:41.669 
00:02:41.669 Message:
00:02:41.669 =================
00:02:41.669 Content Skipped
00:02:41.669 =================
00:02:41.669 
00:02:41.669 apps:
00:02:41.669 
00:02:41.669 libs:
00:02:41.669 kni: explicitly disabled via build config (deprecated lib)
00:02:41.669 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:41.669 
00:02:41.669 drivers:
00:02:41.669 common/cpt: not in enabled drivers build config
00:02:41.669 common/dpaax: not in enabled drivers build config
00:02:41.669 common/iavf: not in enabled drivers build config
00:02:41.669 common/idpf: not in enabled drivers build config
00:02:41.669 common/mvep: not in enabled drivers build config
00:02:41.669 common/octeontx: not in enabled drivers build config
00:02:41.669 bus/auxiliary: not in enabled drivers build config
00:02:41.669 bus/dpaa: not in enabled drivers build config
00:02:41.669 bus/fslmc: not in enabled drivers build config
00:02:41.669 bus/ifpga: not in enabled drivers build config
00:02:41.669 bus/vmbus: not in enabled drivers build config
00:02:41.669 common/cnxk: not in enabled drivers build config
00:02:41.669 common/mlx5: not in enabled drivers build config
00:02:41.669 common/qat: not in enabled drivers build config
00:02:41.669 common/sfc_efx: not in enabled drivers build config
00:02:41.669 mempool/bucket: not in enabled drivers build config
00:02:41.669 mempool/cnxk: not in enabled drivers build config
00:02:41.669 mempool/dpaa: not in enabled drivers build config
00:02:41.669 mempool/dpaa2: not in enabled drivers build config
00:02:41.669 mempool/octeontx: not in enabled drivers build config
00:02:41.669 mempool/stack: not in enabled drivers build config
00:02:41.669 dma/cnxk: not in enabled drivers build config
00:02:41.669 dma/dpaa: not in enabled drivers build config
00:02:41.669 dma/dpaa2: not in enabled drivers build config
00:02:41.669 dma/hisilicon: not in enabled drivers build config
00:02:41.669 dma/idxd: not in enabled drivers build config
00:02:41.669 dma/ioat: not in enabled drivers build config
00:02:41.669 dma/skeleton: not in enabled drivers build config
00:02:41.669 net/af_packet: not in enabled drivers build config
00:02:41.669 net/af_xdp: not in enabled drivers build config
00:02:41.669 net/ark: not in enabled drivers build config
00:02:41.669 net/atlantic: not in enabled drivers build config
00:02:41.669 net/avp: not in enabled drivers build config
00:02:41.669 net/axgbe: not in enabled drivers build config
00:02:41.669 net/bnx2x: not in enabled drivers build config
00:02:41.669 net/bnxt: not in enabled drivers build config
00:02:41.669 net/bonding: not in enabled drivers build config
00:02:41.669 net/cnxk: not in enabled drivers build config
00:02:41.669 net/cxgbe: not in enabled drivers build config
00:02:41.669 net/dpaa: not in enabled drivers build config
00:02:41.669 net/dpaa2: not in enabled drivers build config
00:02:41.669 net/e1000: not in enabled drivers build config
00:02:41.669 net/ena: not in enabled drivers build config
00:02:41.669 net/enetc: not in enabled drivers build config
00:02:41.669 net/enetfec: not in enabled drivers build config
00:02:41.669 net/enic: not in enabled drivers build config
00:02:41.669 net/failsafe: not in enabled drivers build config
00:02:41.669 net/fm10k: not in enabled drivers build config
00:02:41.669 net/gve: not in enabled drivers build config
00:02:41.669 net/hinic: not in enabled drivers build config
00:02:41.669 net/hns3: not in enabled drivers build config
00:02:41.669 net/iavf: not in enabled drivers build config
00:02:41.669 net/ice: not in enabled drivers build config
00:02:41.669 net/idpf: not in enabled drivers build config
00:02:41.669 net/igc: not in enabled drivers build config
00:02:41.669 net/ionic: not in enabled drivers build config
00:02:41.669 net/ipn3ke: not in enabled drivers build config
00:02:41.669 net/ixgbe: not in enabled drivers build config
00:02:41.669 net/kni: not in enabled drivers build config
00:02:41.669 net/liquidio: not in enabled drivers build config
00:02:41.669 net/mana: not in enabled drivers build config
00:02:41.669 net/memif: not in enabled drivers build config
00:02:41.669 net/mlx4: not in enabled drivers build config
00:02:41.669 net/mlx5: not in enabled drivers build config
00:02:41.669 net/mvneta: not in enabled drivers build config
00:02:41.669 net/mvpp2: not in enabled drivers build config
00:02:41.669 net/netvsc: not in enabled drivers build config
00:02:41.669 net/nfb: not in enabled drivers build config
00:02:41.669 net/nfp: not in enabled drivers build config
00:02:41.669 net/ngbe: not in enabled drivers build config
00:02:41.669 net/null: not in enabled drivers build config
00:02:41.669 net/octeontx: not in enabled drivers build config
00:02:41.669 net/octeon_ep: not in enabled drivers build config
00:02:41.669 net/pcap: not in enabled drivers build config
00:02:41.669 net/pfe: not in enabled drivers build config
00:02:41.669 net/qede: not in enabled drivers build config
00:02:41.669 net/ring: not in enabled drivers build config
00:02:41.669 net/sfc: not in enabled drivers build config
00:02:41.669 net/softnic: not in enabled drivers build config
00:02:41.669 net/tap: not in enabled drivers build config
00:02:41.669 net/thunderx: not in enabled drivers build config
00:02:41.669 net/txgbe: not in enabled drivers build config
00:02:41.669 net/vdev_netvsc: not in enabled drivers build config
00:02:41.669 net/vhost: not in enabled drivers build config
00:02:41.669 net/virtio: not in enabled drivers build config
00:02:41.669 net/vmxnet3: not in enabled drivers build config
00:02:41.669 raw/cnxk_bphy: not in enabled drivers build config
00:02:41.669 raw/cnxk_gpio: not in enabled drivers build config
00:02:41.669 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:41.669 raw/ifpga: not in enabled drivers build config
00:02:41.669 raw/ntb: not in enabled drivers build config
00:02:41.669 raw/skeleton: not in enabled drivers build config
00:02:41.669 crypto/armv8: not in enabled drivers build config
00:02:41.669 crypto/bcmfs: not in enabled drivers build config
00:02:41.669 crypto/caam_jr: not in enabled drivers build config
00:02:41.669 crypto/ccp: not in enabled drivers build config
00:02:41.669 crypto/cnxk: not in enabled drivers build config
00:02:41.669 crypto/dpaa_sec: not in enabled drivers build config
00:02:41.669 crypto/dpaa2_sec: not in enabled drivers build config
00:02:41.669 crypto/ipsec_mb: not in enabled drivers build config
00:02:41.669 crypto/mlx5: not in enabled drivers build config
00:02:41.669 crypto/mvsam: not in enabled drivers build config
00:02:41.669 crypto/nitrox: not in enabled drivers build config
00:02:41.669 crypto/null: not in enabled drivers build config
00:02:41.669 crypto/octeontx: not in enabled drivers build config
00:02:41.670 crypto/openssl: not in enabled drivers build config
00:02:41.670 crypto/scheduler: not in enabled drivers build config
00:02:41.670 crypto/uadk: not in enabled drivers build config
00:02:41.670 crypto/virtio: not in enabled drivers build config
00:02:41.670 compress/isal: not in enabled drivers build config
00:02:41.670 compress/mlx5: not in enabled drivers build config
00:02:41.670 compress/octeontx: not in enabled drivers build config
00:02:41.670 compress/zlib: not in enabled drivers build config
00:02:41.670 regex/mlx5: not in enabled drivers build config
00:02:41.670 regex/cn9k: not in enabled drivers build config
00:02:41.670 vdpa/ifc: not in enabled drivers build config
00:02:41.670 vdpa/mlx5: not in enabled drivers build config
00:02:41.670 vdpa/sfc: not in enabled drivers build config
00:02:41.670 event/cnxk: not in enabled drivers build config
00:02:41.670 event/dlb2: not in enabled drivers build config
00:02:41.670 event/dpaa: not in enabled drivers build config
00:02:41.670 event/dpaa2: not in enabled drivers build config
00:02:41.670 event/dsw: not in enabled drivers build config
00:02:41.670 event/opdl: not in enabled drivers build config
00:02:41.670 event/skeleton: not in enabled drivers build config
00:02:41.670 event/sw: not in enabled drivers build config
00:02:41.670 event/octeontx: not in enabled drivers build config
00:02:41.670 baseband/acc: not in enabled drivers build config
00:02:41.670 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:41.670 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:41.670 baseband/la12xx: not in enabled drivers build config
00:02:41.670 baseband/null: not in enabled drivers build config
00:02:41.670 baseband/turbo_sw: not in enabled drivers build config
00:02:41.670 gpu/cuda: not in enabled drivers build config
00:02:41.670 
00:02:41.670 
00:02:41.670 Build targets in project: 309
00:02:41.670 
00:02:41.670 DPDK 22.11.4
00:02:41.670 
00:02:41.670 User defined options
00:02:41.670 libdir : lib
00:02:41.670 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:41.670 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:41.670 c_link_args : 
00:02:41.670 enable_docs : false
00:02:41.670 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:41.670 enable_kmods : false
00:02:41.670 machine : native
00:02:41.670 tests : false
00:02:41.670 
00:02:41.670 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:41.670 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:41.928 01:02:15 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:41.928 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:41.928 [1/738] Generating lib/rte_telemetry_def with a custom command
00:02:41.928 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:02:41.929 [3/738] Generating lib/rte_telemetry_mingw with a custom command
00:02:41.929 [4/738] Generating lib/rte_kvargs_def with a custom command
00:02:41.929 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:41.929 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:41.929 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:41.929 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:41.929 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:41.929 [10/738] Linking static target lib/librte_kvargs.a
00:02:41.929 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:41.929 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:41.929 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:41.929 [14/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:42.186 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:42.186 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:42.186 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:42.186 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:42.186 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:42.186 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.186 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:42.186 [22/738] Linking target lib/librte_kvargs.so.23.0
00:02:42.186 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:42.186 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:42.186 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:42.444 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:42.444 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:42.444 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:42.444 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:42.444 [30/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:42.444 [31/738] Linking static target lib/librte_telemetry.a
00:02:42.444 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:42.444 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:42.444 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:42.444 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:42.444 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:42.444 [37/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:42.444 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:42.444 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:42.444 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:42.444 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:42.701 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.702 [43/738] Linking target lib/librte_telemetry.so.23.0
00:02:42.702 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:42.702 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:42.702 [46/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:42.702 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:42.702 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:42.702 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:42.702 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:42.959 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:42.959 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:42.959 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:42.959 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:42.959 [55/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.959 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.959 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.959 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.959 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.959 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.959 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.959 [62/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.959 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.959 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.959 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.959 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:42.959 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.959 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.959 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.217 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.217 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.217 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.217 [73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.217 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.217 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.217 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.217 [77/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.217 [78/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.217 [79/738] Generating lib/rte_eal_def with a custom command 00:02:43.217 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:43.217 [81/738] Generating lib/rte_ring_def with a custom command 00:02:43.217 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:43.217 [83/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.217 [84/738] Generating lib/rte_rcu_def with a custom command 00:02:43.217 [85/738] Generating lib/rte_rcu_mingw with a custom command 00:02:43.217 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.217 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.217 [88/738] Linking static target lib/librte_ring.a 00:02:43.475 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.475 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:43.475 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:43.475 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.475 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.475 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.475 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.475 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:43.733 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:43.733 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.733 [99/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.733 [100/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.733 [101/738] Linking static target lib/librte_eal.a 00:02:43.733 [102/738] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.733 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.991 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.991 [105/738] Linking static target lib/librte_rcu.a 00:02:43.991 [106/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.991 [107/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.991 [108/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.991 [109/738] Linking static target lib/librte_mempool.a 00:02:43.991 [110/738] Generating lib/rte_net_def with a custom command 00:02:43.991 [111/738] Generating lib/rte_net_mingw with a custom command 00:02:43.991 [112/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.991 [113/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.991 [114/738] Generating lib/rte_meter_mingw with a custom command 00:02:43.991 [115/738] Generating lib/rte_meter_def with a custom command 00:02:43.991 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.248 [117/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.248 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.248 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.248 [120/738] Linking static target lib/librte_meter.a 00:02:44.248 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.248 [122/738] Linking static target lib/librte_net.a 00:02:44.248 [123/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.506 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.506 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.506 [126/738] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.506 [127/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.506 [128/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.506 [129/738] Linking static target lib/librte_mbuf.a 00:02:44.506 [130/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.506 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.506 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.763 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.763 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.763 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.020 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.020 [137/738] Generating lib/rte_ethdev_def with a custom command 00:02:45.020 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:45.020 [139/738] Generating lib/rte_pci_def with a custom command 00:02:45.020 [140/738] Generating lib/rte_pci_mingw with a custom command 00:02:45.020 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.020 [142/738] Linking static target lib/librte_pci.a 00:02:45.020 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.020 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.020 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.020 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.020 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.020 [148/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.020 [149/738] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.276 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.276 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.276 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.276 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.276 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.276 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.276 [156/738] Generating lib/rte_cmdline_def with a custom command 00:02:45.276 [157/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:45.276 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.276 [159/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.276 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.276 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:02:45.276 [162/738] Generating lib/rte_metrics_def with a custom command 00:02:45.276 [163/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:45.276 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.276 [165/738] Generating lib/rte_hash_def with a custom command 00:02:45.276 [166/738] Generating lib/rte_hash_mingw with a custom command 00:02:45.276 [167/738] Generating lib/rte_timer_def with a custom command 00:02:45.276 [168/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.531 [169/738] Linking static target lib/librte_cmdline.a 00:02:45.531 [170/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.531 [171/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.531 [172/738] Generating lib/rte_timer_mingw with a custom 
command 00:02:45.531 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:45.531 [174/738] Linking static target lib/librte_metrics.a 00:02:45.787 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.787 [176/738] Linking static target lib/librte_timer.a 00:02:45.787 [177/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:45.787 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:45.787 [179/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.045 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.045 [181/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.045 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.045 [183/738] Generating lib/rte_acl_def with a custom command 00:02:46.045 [184/738] Generating lib/rte_acl_mingw with a custom command 00:02:46.045 [185/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:46.045 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:46.045 [187/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:46.303 [188/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:46.303 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:46.303 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:46.562 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.562 [192/738] Linking static target lib/librte_ethdev.a 00:02:46.562 [193/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:46.562 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:46.562 [195/738] Linking static target lib/librte_bitratestats.a 00:02:46.562 [196/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:46.562 
[197/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.819 [198/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:46.819 [199/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:46.819 [200/738] Linking static target lib/librte_bbdev.a 00:02:46.819 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:47.077 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:47.077 [203/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.077 [204/738] Linking static target lib/librte_hash.a 00:02:47.077 [205/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:47.077 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:47.336 [207/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.336 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:47.336 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:47.336 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:47.594 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:47.594 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:02:47.594 [213/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:47.594 [214/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:47.594 [215/738] Linking static target lib/librte_cfgfile.a 00:02:47.594 [216/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.594 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.594 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:47.852 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:47.852 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:47.852 [221/738] Generating 
lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.852 [222/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:47.852 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.852 [224/738] Generating lib/rte_cryptodev_def with a custom command 00:02:47.852 [225/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:47.852 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.110 [227/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.110 [228/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:48.110 [229/738] Linking static target lib/librte_compressdev.a 00:02:48.110 [230/738] Linking static target lib/librte_acl.a 00:02:48.110 [231/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.110 [232/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:48.110 [233/738] Linking static target lib/librte_bpf.a 00:02:48.110 [234/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.110 [235/738] Generating lib/rte_distributor_def with a custom command 00:02:48.110 [236/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.110 [237/738] Generating lib/rte_distributor_mingw with a custom command 00:02:48.368 [238/738] Generating lib/rte_efd_def with a custom command 00:02:48.368 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:48.368 [240/738] Generating lib/rte_efd_mingw with a custom command 00:02:48.368 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:48.368 [242/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.368 [243/738] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:48.368 [244/738] Linking static target lib/librte_distributor.a 00:02:48.368 [245/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.626 [246/738] Linking target lib/librte_eal.so.23.0 00:02:48.626 [247/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:48.626 [248/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:48.626 [249/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.626 [250/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.626 [251/738] Linking target lib/librte_ring.so.23.0 00:02:48.626 [252/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:48.626 [253/738] Linking target lib/librte_meter.so.23.0 00:02:48.626 [254/738] Linking target lib/librte_pci.so.23.0 00:02:48.626 [255/738] Linking target lib/librte_timer.so.23.0 00:02:48.626 [256/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:48.626 [257/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:48.626 [258/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:48.626 [259/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:48.626 [260/738] Linking target lib/librte_rcu.so.23.0 00:02:48.626 [261/738] Linking target lib/librte_mempool.so.23.0 00:02:48.884 [262/738] Linking target lib/librte_cfgfile.so.23.0 00:02:48.884 [263/738] Linking target lib/librte_acl.so.23.0 00:02:48.884 [264/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:48.884 [265/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:48.884 [266/738] Linking target lib/librte_mbuf.so.23.0 
00:02:48.884 [267/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:48.884 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:48.884 [269/738] Linking target lib/librte_net.so.23.0
00:02:48.884 [270/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:49.141 [271/738] Linking target lib/librte_bbdev.so.23.0
00:02:49.141 [272/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:49.141 [273/738] Linking target lib/librte_compressdev.so.23.0
00:02:49.141 [274/738] Linking target lib/librte_cmdline.so.23.0
00:02:49.141 [275/738] Linking target lib/librte_hash.so.23.0
00:02:49.141 [276/738] Linking target lib/librte_distributor.so.23.0
00:02:49.141 [277/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:49.141 [278/738] Linking static target lib/librte_efd.a
00:02:49.141 [279/738] Generating lib/rte_eventdev_def with a custom command
00:02:49.141 [280/738] Generating lib/rte_eventdev_mingw with a custom command
00:02:49.141 [281/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:49.141 [282/738] Generating lib/rte_gpudev_def with a custom command
00:02:49.141 [283/738] Generating lib/rte_gpudev_mingw with a custom command
00:02:49.399 [284/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.399 [285/738] Linking target lib/librte_efd.so.23.0
00:02:49.399 [286/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:49.399 [287/738] Linking static target lib/librte_cryptodev.a
00:02:49.399 [288/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:49.399 [289/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:49.656 [290/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:49.656 [291/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.656 [292/738] Linking target lib/librte_ethdev.so.23.0
00:02:49.656 [293/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:49.656 [294/738] Generating lib/rte_gro_def with a custom command
00:02:49.656 [295/738] Generating lib/rte_gro_mingw with a custom command
00:02:49.656 [296/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:49.657 [297/738] Linking target lib/librte_metrics.so.23.0
00:02:49.657 [298/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:49.913 [299/738] Linking target lib/librte_bpf.so.23.0
00:02:49.913 [300/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:49.913 [301/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:49.913 [302/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:49.913 [303/738] Linking static target lib/librte_gpudev.a
00:02:49.913 [304/738] Linking target lib/librte_bitratestats.so.23.0
00:02:49.913 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:49.913 [306/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:49.913 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:49.913 [308/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:49.913 [309/738] Linking static target lib/librte_gro.a
00:02:50.171 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:50.171 [311/738] Generating lib/rte_gso_def with a custom command
00:02:50.171 [312/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.171 [313/738] Generating lib/rte_gso_mingw with a custom command
00:02:50.171 [314/738] Linking target lib/librte_gro.so.23.0
00:02:50.171 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:50.171 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:50.171 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:50.171 [318/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:50.171 [319/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:50.171 [320/738] Linking static target lib/librte_gso.a
00:02:50.428 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.428 [322/738] Linking target lib/librte_gpudev.so.23.0
00:02:50.428 [323/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.428 [324/738] Generating lib/rte_ip_frag_def with a custom command
00:02:50.428 [325/738] Linking target lib/librte_gso.so.23.0
00:02:50.428 [326/738] Generating lib/rte_ip_frag_mingw with a custom command
00:02:50.428 [327/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:50.428 [328/738] Generating lib/rte_jobstats_def with a custom command
00:02:50.428 [329/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:50.428 [330/738] Generating lib/rte_jobstats_mingw with a custom command
00:02:50.428 [331/738] Linking static target lib/librte_eventdev.a
00:02:50.428 [332/738] Generating lib/rte_latencystats_def with a custom command
00:02:50.428 [333/738] Generating lib/rte_latencystats_mingw with a custom command
00:02:50.428 [334/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:50.428 [335/738] Linking static target lib/librte_jobstats.a
00:02:50.428 [336/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:50.687 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:50.687 [338/738] Generating lib/rte_lpm_def with a custom command
00:02:50.687 [339/738] Generating lib/rte_lpm_mingw with a custom command
00:02:50.687 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:50.687 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:50.687 [342/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.687 [343/738] Linking target lib/librte_jobstats.so.23.0
00:02:50.687 [344/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:50.687 [345/738] Linking static target lib/librte_ip_frag.a
00:02:50.687 [346/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.687 [347/738] Linking target lib/librte_cryptodev.so.23.0
00:02:51.000 [348/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:51.000 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:51.000 [350/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:51.000 [351/738] Linking static target lib/librte_latencystats.a
00:02:51.000 [352/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:51.000 [353/738] Generating lib/rte_member_def with a custom command
00:02:51.000 [354/738] Generating lib/rte_member_mingw with a custom command
00:02:51.000 [355/738] Generating lib/rte_pcapng_def with a custom command
00:02:51.000 [356/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.000 [357/738] Generating lib/rte_pcapng_mingw with a custom command
00:02:51.000 [358/738] Linking target lib/librte_ip_frag.so.23.0
00:02:51.000 [359/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.000 [360/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:51.000 [361/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:51.000 [362/738] Linking target lib/librte_latencystats.so.23.0
00:02:51.258 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:51.258 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:51.258 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:51.258 [366/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:51.258 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:51.514 [368/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:51.515 [369/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:51.515 [370/738] Generating lib/rte_power_def with a custom command
00:02:51.515 [371/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:51.515 [372/738] Linking static target lib/librte_lpm.a
00:02:51.515 [373/738] Generating lib/rte_power_mingw with a custom command
00:02:51.515 [374/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:51.515 [375/738] Generating lib/rte_rawdev_def with a custom command
00:02:51.515 [376/738] Generating lib/rte_rawdev_mingw with a custom command
00:02:51.515 [377/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:51.515 [378/738] Generating lib/rte_regexdev_def with a custom command
00:02:51.515 [379/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:51.515 [380/738] Linking static target lib/librte_pcapng.a
00:02:51.515 [381/738] Generating lib/rte_regexdev_mingw with a custom command
00:02:51.772 [382/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:51.772 [383/738] Generating lib/rte_dmadev_def with a custom command
00:02:51.772 [384/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.772 [385/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:51.772 [386/738] Generating lib/rte_dmadev_mingw with a custom command
00:02:51.772 [387/738] Linking target lib/librte_lpm.so.23.0
00:02:51.772 [388/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:51.772 [389/738] Linking static target lib/librte_rawdev.a
00:02:51.772 [390/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.772 [391/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:51.772 [392/738] Linking static target lib/librte_power.a
00:02:51.772 [393/738] Linking target lib/librte_pcapng.so.23.0
00:02:51.772 [394/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:51.772 [395/738] Generating lib/rte_rib_def with a custom command
00:02:51.772 [396/738] Generating lib/rte_rib_mingw with a custom command
00:02:51.772 [397/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.772 [398/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:52.030 [399/738] Linking static target lib/librte_dmadev.a
00:02:52.030 [400/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:52.030 [401/738] Linking target lib/librte_eventdev.so.23.0
00:02:52.030 [402/738] Generating lib/rte_reorder_def with a custom command
00:02:52.030 [403/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:52.030 [404/738] Linking static target lib/librte_regexdev.a
00:02:52.030 [405/738] Generating lib/rte_reorder_mingw with a custom command
00:02:52.030 [406/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:52.030 [407/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.030 [408/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:52.030 [409/738] Linking static target lib/librte_member.a
00:02:52.030 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:52.030 [411/738] Linking target lib/librte_rawdev.so.23.0
00:02:52.030 [412/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:52.287 [413/738] Generating lib/rte_sched_def with a custom command
00:02:52.287 [414/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:52.287 [415/738] Generating lib/rte_sched_mingw with a custom command
00:02:52.287 [416/738] Generating lib/rte_security_def with a custom command
00:02:52.287 [417/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.287 [418/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:52.287 [419/738] Generating lib/rte_security_mingw with a custom command
00:02:52.287 [420/738] Linking target lib/librte_dmadev.so.23.0
00:02:52.287 [421/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:52.287 [422/738] Linking static target lib/librte_reorder.a
00:02:52.287 [423/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.287 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:52.287 [425/738] Linking target lib/librte_member.so.23.0
00:02:52.287 [426/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:52.287 [427/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:52.287 [428/738] Linking static target lib/librte_rib.a
00:02:52.287 [429/738] Generating lib/rte_stack_def with a custom command
00:02:52.287 [430/738] Generating lib/rte_stack_mingw with a custom command
00:02:52.287 [431/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:52.287 [432/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:52.287 [433/738] Linking static target lib/librte_stack.a
00:02:52.545 [434/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.545 [435/738] Linking target lib/librte_regexdev.so.23.0
00:02:52.545 [436/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.545 [437/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.545 [438/738] Linking target lib/librte_reorder.so.23.0
00:02:52.545 [439/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:52.545 [440/738] Linking target lib/librte_power.so.23.0
00:02:52.545 [441/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.545 [442/738] Linking target lib/librte_stack.so.23.0
00:02:52.802 [443/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.802 [444/738] Linking target lib/librte_rib.so.23.0
00:02:52.802 [445/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:52.802 [446/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:52.802 [447/738] Linking static target lib/librte_security.a
00:02:52.802 [448/738] Generating lib/rte_vhost_def with a custom command
00:02:52.802 [449/738] Generating lib/rte_vhost_mingw with a custom command
00:02:52.802 [450/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:52.802 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:52.802 [452/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:53.060 [453/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.060 [454/738] Linking target lib/librte_security.so.23.0
00:02:53.060 [455/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:53.060 [456/738] Linking static target lib/librte_sched.a
00:02:53.060 [457/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:53.317 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:53.317 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:53.317 [460/738] Generating lib/rte_ipsec_def with a custom command
00:02:53.317 [461/738] Generating lib/rte_ipsec_mingw with a custom command
00:02:53.317 [462/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:53.317 [463/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.574 [464/738] Linking target lib/librte_sched.so.23.0
00:02:53.574 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:53.574 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:53.574 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:53.574 [468/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:53.574 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:53.574 [470/738] Generating lib/rte_fib_def with a custom command
00:02:53.574 [471/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:53.574 [472/738] Generating lib/rte_fib_mingw with a custom command
00:02:53.831 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:54.088 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:54.088 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:54.088 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:54.088 [477/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:54.088 [478/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:54.088 [479/738] Linking static target lib/librte_fib.a
00:02:54.088 [480/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:54.088 [481/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:54.088 [482/738] Linking static target lib/librte_ipsec.a
00:02:54.345 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:54.345 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:54.345 [485/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.345 [486/738] Linking target lib/librte_fib.so.23.0
00:02:54.345 [487/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.602 [488/738] Linking target lib/librte_ipsec.so.23.0
00:02:54.602 [489/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:54.859 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:54.859 [491/738] Generating lib/rte_port_def with a custom command
00:02:54.859 [492/738] Generating lib/rte_port_mingw with a custom command
00:02:54.859 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:54.859 [494/738] Generating lib/rte_pdump_def with a custom command
00:02:54.859 [495/738] Generating lib/rte_pdump_mingw with a custom command
00:02:54.859 [496/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:54.859 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:55.116 [498/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:55.116 [499/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:55.116 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:55.116 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:55.116 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:55.116 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:55.373 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:55.373 [505/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:55.373 [506/738] Linking static target lib/librte_port.a
00:02:55.373 [507/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:55.373 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:55.373 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:55.373 [510/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:55.373 [511/738] Linking static target lib/librte_pdump.a
00:02:55.630 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:55.630 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.630 [514/738] Linking target lib/librte_pdump.so.23.0
00:02:55.630 [515/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.888 [516/738] Linking target lib/librte_port.so.23.0
00:02:55.888 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:55.888 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:55.888 [519/738] Generating lib/rte_table_def with a custom command
00:02:55.888 [520/738] Generating lib/rte_table_mingw with a custom command
00:02:55.888 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:55.888 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:55.888 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:56.145 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:56.145 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:56.145 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:56.145 [527/738] Generating lib/rte_pipeline_def with a custom command
00:02:56.145 [528/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:56.145 [529/738] Linking static target lib/librte_table.a
00:02:56.145 [530/738] Generating lib/rte_pipeline_mingw with a custom command
00:02:56.145 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:56.402 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:56.402 [533/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:56.402 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:56.661 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:56.661 [536/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.661 [537/738] Linking target lib/librte_table.so.23.0
00:02:56.661 [538/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:56.661 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:56.661 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:56.661 [541/738] Generating lib/rte_graph_def with a custom command
00:02:56.661 [542/738] Generating lib/rte_graph_mingw with a custom command
00:02:56.919 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:56.919 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:56.919 [545/738] Linking static target lib/librte_graph.a
00:02:56.919 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:56.919 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:57.177 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:57.177 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:57.177 [550/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:57.177 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:57.177 [552/738] Generating lib/rte_node_def with a custom command
00:02:57.435 [553/738] Generating lib/rte_node_mingw with a custom command
00:02:57.435 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:57.435 [555/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:57.435 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:57.435 [557/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.435 [558/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:57.435 [559/738] Linking target lib/librte_graph.so.23.0
00:02:57.435 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:57.435 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:57.435 [562/738] Generating drivers/rte_bus_pci_def with a custom command
00:02:57.694 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:57.694 [564/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:57.694 [565/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:57.694 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:57.694 [567/738] Generating drivers/rte_bus_vdev_def with a custom command
00:02:57.694 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:57.694 [569/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:57.694 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:57.694 [571/738] Generating drivers/rte_mempool_ring_def with a custom command
00:02:57.694 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:57.694 [573/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:57.694 [574/738] Linking static target lib/librte_node.a
00:02:57.694 [575/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:57.694 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:57.694 [577/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:57.694 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:58.000 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:58.000 [580/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.000 [581/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:58.000 [582/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:58.000 [583/738] Linking static target drivers/librte_bus_vdev.a
00:02:58.000 [584/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:58.000 [585/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:58.000 [586/738] Linking static target drivers/librte_bus_pci.a
00:02:58.000 [587/738] Linking target lib/librte_node.so.23.0
00:02:58.000 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:58.000 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.276 [590/738] Linking target drivers/librte_bus_vdev.so.23.0
00:02:58.276 [591/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.276 [592/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:58.276 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:58.276 [594/738] Linking target drivers/librte_bus_pci.so.23.0
00:02:58.276 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:58.276 [596/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:58.276 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:58.276 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:58.276 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:58.533 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:58.533 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:58.533 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.533 [603/738] Linking static target drivers/librte_mempool_ring.a
00:02:58.534 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.534 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:02:58.534 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:58.792 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:59.048 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:59.048 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:59.048 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:59.306 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:59.306 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:59.564 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:59.564 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:59.564 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:59.822 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:59.822 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:02:59.822 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:59.822 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:00.388 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:00.388 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:00.645 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:00.645 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:00.645 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:00.645 [625/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:00.645 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:00.902 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:00.902 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:00.902 [629/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:03:00.902 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:00.902 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:01.161 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:01.419 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:01.419 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:01.419 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:01.419 [636/738] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:01.419 [637/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:01.419 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:01.419 [639/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:01.678 [640/738] Linking static target drivers/librte_net_i40e.a
00:03:01.678 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:01.678 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:01.678 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:01.678 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:01.678 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:01.936 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:01.936 [647/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.936 [648/738] Linking target drivers/librte_net_i40e.so.23.0
00:03:02.194 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:02.194 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:02.194 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:02.194 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:02.194 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:02.194 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:02.194 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:02.453 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:02.453 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:02.453 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:02.453 [659/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:02.711 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:02.711 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:02.711 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:02.711 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:02.969 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:02.969 [665/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:02.969 [666/738] Linking static target lib/librte_vhost.a
00:03:03.227 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:03.227 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:03.227 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:03.486 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:03.486 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:03.486 [672/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:03.486 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:03.744 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:03.744 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:03.744 [676/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.744 [677/738] Linking target lib/librte_vhost.so.23.0
00:03:03.744 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:03.744 [679/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:03.744 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:03.744 [681/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:04.002 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:04.002 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:04.260 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:04.260 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:04.260 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:04.260 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:04.260 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:04.260 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:04.519 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:04.519 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:04.519 [692/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:04.519 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:04.777 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:04.777 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:05.036 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:05.036 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:05.036 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:05.294 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:05.294 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:05.294 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:05.294 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:05.552 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:05.552 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:05.552 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:05.810 [706/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:05.810 [707/738] Linking static target lib/librte_pipeline.a
00:03:05.810 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:05.810 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:06.068 [710/738] Linking target app/dpdk-dumpcap
00:03:06.068 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:06.068 [712/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:06.068 [713/738] Linking target app/dpdk-pdump
00:03:06.326 [714/738] Linking target app/dpdk-proc-info
00:03:06.326 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:06.326 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:06.326 [717/738] Linking target app/dpdk-test-acl
00:03:06.326 [718/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:06.326 [719/738] Linking target app/dpdk-test-bbdev
00:03:06.326 [720/738] Linking target app/dpdk-test-cmdline
00:03:06.584 [721/738] Linking target app/dpdk-test-compress-perf
00:03:06.584 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:06.584 [723/738] Linking target app/dpdk-test-crypto-perf
00:03:06.584 [724/738] Linking target app/dpdk-test-eventdev
00:03:06.585 [725/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:06.585 [726/738] Linking target app/dpdk-test-flow-perf
00:03:06.585 [727/738] Linking target app/dpdk-test-gpudev
00:03:06.585 [728/738] Linking target app/dpdk-test-pipeline
00:03:06.585 [729/738] Linking target app/dpdk-test-fib
00:03:06.585 [730/738] Linking target app/dpdk-test-regex
00:03:06.843 [731/738] Linking target app/dpdk-testpmd
00:03:06.843 [732/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:06.843 [733/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:07.409 [734/738] Linking target app/dpdk-test-sad
00:03:07.409 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:07.409 [736/738] Linking target app/dpdk-test-security-perf
00:03:08.344 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.602 [738/738] Linking target lib/librte_pipeline.so.23.0
00:03:08.602 01:02:41 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:03:08.602 01:02:41 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:08.602 01:02:41 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:08.602 [0/1] Installing files.
00:03:08.863 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.863 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.863 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:08.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.865 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.866 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:08.867 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:08.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:08.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:08.868 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:08.868 Installing lib/librte_rawdev.so.23.0 to
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.868 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.868 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.868 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.868 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.868 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.868 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.164 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.165 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.166 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.167 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.167 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:09.167 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.167 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:09.167 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.167 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:09.167 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.167 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:09.167 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.167 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:09.167 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.167 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:09.167 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.167 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:09.167 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.167 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:09.167 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.167 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:09.167 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.167 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:09.167 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:09.167 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:09.167 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.167 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:09.167 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.167 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:09.167 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.167 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:09.167 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.167 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:09.167 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.167 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:09.167 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.167 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:09.167 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.167 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:09.167 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.167 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:09.167 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.167 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:09.167 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.167 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:09.167 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.167 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:09.167 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.167 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:09.167 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:09.167 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:09.167 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:03:09.167 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:09.167 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.167 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:09.167 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:09.167 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:09.167 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:09.167 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:09.167 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:09.167 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:09.167 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:09.167 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:09.167 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:09.167 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:09.167 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:09.167 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:09.167 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.167 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:09.167 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.167 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:09.167 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.167 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:09.167 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.167 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:09.167 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.167 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:09.167 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.167 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:09.167 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.168 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:09.168 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.168 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:09.168 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.168 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:09.168 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:09.168 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:09.168 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.168 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:09.168 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.168 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:09.168 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.168 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:09.168 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.168 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:09.168 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.168 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:09.168 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.168 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:09.168 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.168 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:09.168 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.168 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:09.168 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.168 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:09.168 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:09.168 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:09.168 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.168 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:09.168 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.168 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:09.168 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.168 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:09.168 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.168 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:09.168 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.168 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:09.168 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.168 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:09.168 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.168 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:09.168 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:09.168 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:09.168 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:09.168 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:09.168 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:09.168 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:09.168 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:09.168 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:09.168 01:02:42 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:09.168 01:02:42 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.168 00:03:09.168 real 0m32.503s 00:03:09.168 user 3m36.555s 00:03:09.168 sys 0m32.820s 00:03:09.168 01:02:42 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.168 ************************************ 00:03:09.168 END TEST build_native_dpdk 00:03:09.168 ************************************ 00:03:09.168 01:02:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:09.168 01:02:42 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.168 01:02:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.168 01:02:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:09.168 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:09.426 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.426 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:09.426 Using 'verbs' RDMA provider 00:03:20.334 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.318 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.318 Creating mk/config.mk...done. 00:03:30.318 Creating mk/cc.flags.mk...done. 00:03:30.318 Type 'make' to build. 
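The configure step above finds the out-of-tree DPDK through the `libdpdk.pc` files installed into `build/lib/pkgconfig` ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch of that lookup mechanism follows; the directory, library name, and `.pc` contents are illustrative stand-ins, not taken from this run:

```shell
# Sketch: how pkg-config resolves an out-of-tree DPDK build via PKG_CONFIG_PATH.
# All paths and the .pc file contents are illustrative, not from this log.
demo=$(mktemp -d)
mkdir -p "$demo/build/lib/pkgconfig"

# A throwaway pkg-config file standing in for the real libdpdk.pc.
cat > "$demo/build/lib/pkgconfig/libdpdk.pc" <<EOF
prefix=$demo/build
libdir=\${prefix}/lib
includedir=\${prefix}/include

Name: libdpdk
Description: demo DPDK pkg-config file
Version: 22.11.0
Libs: -L\${libdir} -lrte_eal
Cflags: -I\${includedir}
EOF

# Pointing PKG_CONFIG_PATH at the build tree is what lets a consumer
# (here, SPDK's configure with --with-dpdk=<build dir>) pick up the flags.
PKG_CONFIG_PATH="$demo/build/lib/pkgconfig" pkg-config --cflags libdpdk
```

This is why the log prints the DPDK library and include directories right after configure: they come straight out of the `Libs:` and `Cflags:` entries of the installed `.pc` files.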
00:03:30.318 01:03:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:30.318 01:03:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:30.318 01:03:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:30.318 01:03:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:30.318 ************************************ 00:03:30.318 START TEST make 00:03:30.318 ************************************ 00:03:30.318 01:03:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:30.318 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:30.318 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:30.318 meson setup builddir \ 00:03:30.318 -Dwith-libaio=enabled \ 00:03:30.318 -Dwith-liburing=enabled \ 00:03:30.318 -Dwith-libvfn=disabled \ 00:03:30.318 -Dwith-spdk=disabled \ 00:03:30.318 -Dexamples=false \ 00:03:30.318 -Dtests=false \ 00:03:30.318 -Dtools=false && \ 00:03:30.318 meson compile -C builddir && \ 00:03:30.318 cd -) 00:03:31.695 The Meson build system 00:03:31.695 Version: 1.5.0 00:03:31.695 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:31.695 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:31.695 Build type: native build 00:03:31.695 Project name: xnvme 00:03:31.695 Project version: 0.7.5 00:03:31.695 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:31.695 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:31.695 Host machine cpu family: x86_64 00:03:31.695 Host machine cpu: x86_64 00:03:31.695 Message: host_machine.system: linux 00:03:31.695 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:31.695 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:31.695 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:31.695 Run-time dependency threads found: YES 00:03:31.695 Has header "setupapi.h" : NO 00:03:31.695 Has header "linux/blkzoned.h" : YES 00:03:31.695 Has header 
"linux/blkzoned.h" : YES (cached) 00:03:31.695 Has header "libaio.h" : YES 00:03:31.695 Library aio found: YES 00:03:31.695 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:31.695 Run-time dependency liburing found: YES 2.2 00:03:31.695 Dependency libvfn skipped: feature with-libvfn disabled 00:03:31.695 Found CMake: /usr/bin/cmake (3.27.7) 00:03:31.695 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:31.695 Subproject spdk : skipped: feature with-spdk disabled 00:03:31.695 Run-time dependency appleframeworks found: NO (tried framework) 00:03:31.695 Run-time dependency appleframeworks found: NO (tried framework) 00:03:31.695 Library rt found: YES 00:03:31.695 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:31.695 Configuring xnvme_config.h using configuration 00:03:31.695 Configuring xnvme.spec using configuration 00:03:31.695 Run-time dependency bash-completion found: YES 2.11 00:03:31.695 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:31.695 Program cp found: YES (/usr/bin/cp) 00:03:31.695 Build targets in project: 3 00:03:31.695 00:03:31.695 xnvme 0.7.5 00:03:31.695 00:03:31.695 Subprojects 00:03:31.695 spdk : NO Feature 'with-spdk' disabled 00:03:31.695 00:03:31.695 User defined options 00:03:31.695 examples : false 00:03:31.695 tests : false 00:03:31.695 tools : false 00:03:31.695 with-libaio : enabled 00:03:31.695 with-liburing: enabled 00:03:31.695 with-libvfn : disabled 00:03:31.695 with-spdk : disabled 00:03:31.695 00:03:31.695 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:32.272 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:32.272 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:32.272 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:32.272 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:32.272 [4/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:32.272 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:32.272 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:32.272 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:32.272 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:32.272 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:32.272 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:32.272 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:32.272 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:32.272 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:32.272 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:32.272 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:32.272 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:32.272 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:32.272 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:32.272 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:32.272 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:32.272 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:32.272 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:32.530 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:32.530 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:32.530 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:32.530 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 
00:03:32.530 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:32.530 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:32.530 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:32.530 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:32.530 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:32.530 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:32.530 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:32.530 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:32.530 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:32.530 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:32.530 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:32.530 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:32.530 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:32.530 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:32.530 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:32.530 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:32.530 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:32.530 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:32.530 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:32.530 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:32.530 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:32.530 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:32.530 [49/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:32.530 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:32.530 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:32.530 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:32.530 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:32.530 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:32.530 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:32.530 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:32.530 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:32.788 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:32.788 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:32.788 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:32.788 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:32.788 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:32.788 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:32.788 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:32.788 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:32.788 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:32.788 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:32.788 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:32.789 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:32.789 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:32.789 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:32.789 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:32.789 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:33.046 
[74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:33.046 [75/76] Linking static target lib/libxnvme.a 00:03:33.303 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:33.303 INFO: autodetecting backend as ninja 00:03:33.303 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:33.303 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:05.374 CC lib/ut_mock/mock.o 00:04:05.374 CC lib/log/log.o 00:04:05.374 CC lib/log/log_deprecated.o 00:04:05.374 CC lib/log/log_flags.o 00:04:05.374 CC lib/ut/ut.o 00:04:05.374 LIB libspdk_ut.a 00:04:05.374 LIB libspdk_ut_mock.a 00:04:05.374 SO libspdk_ut.so.2.0 00:04:05.374 LIB libspdk_log.a 00:04:05.374 SO libspdk_ut_mock.so.6.0 00:04:05.374 SO libspdk_log.so.7.1 00:04:05.374 SYMLINK libspdk_ut.so 00:04:05.374 SYMLINK libspdk_ut_mock.so 00:04:05.374 SYMLINK libspdk_log.so 00:04:05.374 CC lib/dma/dma.o 00:04:05.374 CXX lib/trace_parser/trace.o 00:04:05.374 CC lib/util/base64.o 00:04:05.374 CC lib/util/bit_array.o 00:04:05.374 CC lib/util/cpuset.o 00:04:05.374 CC lib/util/crc16.o 00:04:05.374 CC lib/util/crc32.o 00:04:05.374 CC lib/util/crc32c.o 00:04:05.374 CC lib/ioat/ioat.o 00:04:05.374 CC lib/vfio_user/host/vfio_user_pci.o 00:04:05.374 CC lib/util/crc32_ieee.o 00:04:05.374 CC lib/util/crc64.o 00:04:05.374 CC lib/util/dif.o 00:04:05.374 CC lib/vfio_user/host/vfio_user.o 00:04:05.374 CC lib/util/fd.o 00:04:05.374 CC lib/util/fd_group.o 00:04:05.374 LIB libspdk_dma.a 00:04:05.374 SO libspdk_dma.so.5.0 00:04:05.374 LIB libspdk_ioat.a 00:04:05.374 CC lib/util/file.o 00:04:05.374 CC lib/util/hexlify.o 00:04:05.374 SO libspdk_ioat.so.7.0 00:04:05.374 SYMLINK libspdk_dma.so 00:04:05.374 CC lib/util/iov.o 00:04:05.374 CC lib/util/math.o 00:04:05.374 SYMLINK libspdk_ioat.so 00:04:05.374 CC lib/util/net.o 00:04:05.374 CC lib/util/pipe.o 00:04:05.374 CC lib/util/strerror_tls.o 00:04:05.374 CC lib/util/string.o 00:04:05.374 LIB libspdk_vfio_user.a 00:04:05.374 SO 
libspdk_vfio_user.so.5.0 00:04:05.374 CC lib/util/uuid.o 00:04:05.374 CC lib/util/xor.o 00:04:05.374 SYMLINK libspdk_vfio_user.so 00:04:05.374 CC lib/util/zipf.o 00:04:05.374 CC lib/util/md5.o 00:04:05.374 LIB libspdk_util.a 00:04:05.374 SO libspdk_util.so.10.1 00:04:05.374 LIB libspdk_trace_parser.a 00:04:05.374 SO libspdk_trace_parser.so.6.0 00:04:05.374 SYMLINK libspdk_util.so 00:04:05.374 SYMLINK libspdk_trace_parser.so 00:04:05.374 CC lib/conf/conf.o 00:04:05.374 CC lib/json/json_parse.o 00:04:05.374 CC lib/json/json_util.o 00:04:05.374 CC lib/json/json_write.o 00:04:05.374 CC lib/rdma_utils/rdma_utils.o 00:04:05.374 CC lib/idxd/idxd.o 00:04:05.374 CC lib/idxd/idxd_user.o 00:04:05.374 CC lib/idxd/idxd_kernel.o 00:04:05.374 CC lib/env_dpdk/env.o 00:04:05.374 CC lib/vmd/vmd.o 00:04:05.374 CC lib/vmd/led.o 00:04:05.374 LIB libspdk_conf.a 00:04:05.374 CC lib/env_dpdk/memory.o 00:04:05.374 CC lib/env_dpdk/pci.o 00:04:05.374 SO libspdk_conf.so.6.0 00:04:05.374 CC lib/env_dpdk/init.o 00:04:05.374 LIB libspdk_rdma_utils.a 00:04:05.374 LIB libspdk_json.a 00:04:05.374 SYMLINK libspdk_conf.so 00:04:05.374 CC lib/env_dpdk/threads.o 00:04:05.374 CC lib/env_dpdk/pci_ioat.o 00:04:05.374 SO libspdk_rdma_utils.so.1.0 00:04:05.374 SO libspdk_json.so.6.0 00:04:05.374 SYMLINK libspdk_rdma_utils.so 00:04:05.374 CC lib/env_dpdk/pci_virtio.o 00:04:05.374 SYMLINK libspdk_json.so 00:04:05.374 CC lib/env_dpdk/pci_vmd.o 00:04:05.374 CC lib/env_dpdk/pci_idxd.o 00:04:05.374 CC lib/env_dpdk/pci_event.o 00:04:05.374 CC lib/env_dpdk/sigbus_handler.o 00:04:05.374 CC lib/rdma_provider/common.o 00:04:05.374 CC lib/env_dpdk/pci_dpdk.o 00:04:05.374 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:05.374 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:05.374 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:05.374 LIB libspdk_idxd.a 00:04:05.374 SO libspdk_idxd.so.12.1 00:04:05.374 SYMLINK libspdk_idxd.so 00:04:05.374 LIB libspdk_vmd.a 00:04:05.374 SO libspdk_vmd.so.6.0 00:04:05.374 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:04:05.374 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:05.374 CC lib/jsonrpc/jsonrpc_server.o 00:04:05.374 CC lib/jsonrpc/jsonrpc_client.o 00:04:05.374 SYMLINK libspdk_vmd.so 00:04:05.374 LIB libspdk_rdma_provider.a 00:04:05.374 SO libspdk_rdma_provider.so.7.0 00:04:05.374 SYMLINK libspdk_rdma_provider.so 00:04:05.374 LIB libspdk_jsonrpc.a 00:04:05.374 SO libspdk_jsonrpc.so.6.0 00:04:05.374 SYMLINK libspdk_jsonrpc.so 00:04:05.374 CC lib/rpc/rpc.o 00:04:05.635 LIB libspdk_env_dpdk.a 00:04:05.635 SO libspdk_env_dpdk.so.15.1 00:04:05.635 LIB libspdk_rpc.a 00:04:05.635 SO libspdk_rpc.so.6.0 00:04:05.635 SYMLINK libspdk_env_dpdk.so 00:04:05.635 SYMLINK libspdk_rpc.so 00:04:05.894 CC lib/trace/trace.o 00:04:05.894 CC lib/trace/trace_flags.o 00:04:05.894 CC lib/trace/trace_rpc.o 00:04:05.894 CC lib/keyring/keyring.o 00:04:05.894 CC lib/keyring/keyring_rpc.o 00:04:05.894 CC lib/notify/notify.o 00:04:05.894 CC lib/notify/notify_rpc.o 00:04:06.153 LIB libspdk_notify.a 00:04:06.153 SO libspdk_notify.so.6.0 00:04:06.153 LIB libspdk_keyring.a 00:04:06.153 LIB libspdk_trace.a 00:04:06.153 SO libspdk_keyring.so.2.0 00:04:06.153 SYMLINK libspdk_notify.so 00:04:06.153 SO libspdk_trace.so.11.0 00:04:06.153 SYMLINK libspdk_keyring.so 00:04:06.153 SYMLINK libspdk_trace.so 00:04:06.413 CC lib/thread/thread.o 00:04:06.413 CC lib/thread/iobuf.o 00:04:06.413 CC lib/sock/sock.o 00:04:06.413 CC lib/sock/sock_rpc.o 00:04:06.981 LIB libspdk_sock.a 00:04:06.981 SO libspdk_sock.so.10.0 00:04:06.981 SYMLINK libspdk_sock.so 00:04:07.243 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:07.243 CC lib/nvme/nvme_fabric.o 00:04:07.243 CC lib/nvme/nvme_ctrlr.o 00:04:07.243 CC lib/nvme/nvme_ns_cmd.o 00:04:07.243 CC lib/nvme/nvme_ns.o 00:04:07.243 CC lib/nvme/nvme_pcie_common.o 00:04:07.243 CC lib/nvme/nvme_pcie.o 00:04:07.243 CC lib/nvme/nvme_qpair.o 00:04:07.243 CC lib/nvme/nvme.o 00:04:07.814 CC lib/nvme/nvme_quirks.o 00:04:07.814 CC lib/nvme/nvme_transport.o 00:04:07.814 CC lib/nvme/nvme_discovery.o 00:04:07.814 
CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:08.075 LIB libspdk_thread.a 00:04:08.075 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:08.075 CC lib/nvme/nvme_tcp.o 00:04:08.075 SO libspdk_thread.so.11.0 00:04:08.075 CC lib/nvme/nvme_opal.o 00:04:08.075 SYMLINK libspdk_thread.so 00:04:08.075 CC lib/nvme/nvme_io_msg.o 00:04:08.336 CC lib/nvme/nvme_poll_group.o 00:04:08.336 CC lib/nvme/nvme_zns.o 00:04:08.336 CC lib/nvme/nvme_stubs.o 00:04:08.336 CC lib/nvme/nvme_auth.o 00:04:08.596 CC lib/nvme/nvme_cuse.o 00:04:08.596 CC lib/nvme/nvme_rdma.o 00:04:08.856 CC lib/accel/accel.o 00:04:08.856 CC lib/blob/blobstore.o 00:04:08.856 CC lib/blob/request.o 00:04:08.856 CC lib/blob/zeroes.o 00:04:09.117 CC lib/init/json_config.o 00:04:09.117 CC lib/blob/blob_bs_dev.o 00:04:09.378 CC lib/init/subsystem.o 00:04:09.378 CC lib/init/subsystem_rpc.o 00:04:09.378 CC lib/init/rpc.o 00:04:09.378 CC lib/accel/accel_rpc.o 00:04:09.378 CC lib/accel/accel_sw.o 00:04:09.378 LIB libspdk_init.a 00:04:09.638 SO libspdk_init.so.6.0 00:04:09.638 CC lib/virtio/virtio.o 00:04:09.638 CC lib/fsdev/fsdev.o 00:04:09.638 CC lib/virtio/virtio_vhost_user.o 00:04:09.638 SYMLINK libspdk_init.so 00:04:09.638 CC lib/virtio/virtio_vfio_user.o 00:04:09.638 CC lib/virtio/virtio_pci.o 00:04:09.638 CC lib/fsdev/fsdev_io.o 00:04:09.638 CC lib/fsdev/fsdev_rpc.o 00:04:09.896 LIB libspdk_virtio.a 00:04:09.896 SO libspdk_virtio.so.7.0 00:04:09.896 CC lib/event/app.o 00:04:09.896 CC lib/event/reactor.o 00:04:09.896 CC lib/event/log_rpc.o 00:04:09.896 CC lib/event/app_rpc.o 00:04:09.896 CC lib/event/scheduler_static.o 00:04:10.154 SYMLINK libspdk_virtio.so 00:04:10.154 LIB libspdk_accel.a 00:04:10.154 SO libspdk_accel.so.16.0 00:04:10.154 LIB libspdk_nvme.a 00:04:10.154 SYMLINK libspdk_accel.so 00:04:10.154 LIB libspdk_fsdev.a 00:04:10.154 SO libspdk_fsdev.so.2.0 00:04:10.154 SO libspdk_nvme.so.15.0 00:04:10.154 SYMLINK libspdk_fsdev.so 00:04:10.412 CC lib/bdev/bdev_rpc.o 00:04:10.412 CC lib/bdev/bdev.o 00:04:10.412 CC 
lib/bdev/bdev_zone.o 00:04:10.412 CC lib/bdev/part.o 00:04:10.412 CC lib/bdev/scsi_nvme.o 00:04:10.412 SYMLINK libspdk_nvme.so 00:04:10.412 LIB libspdk_event.a 00:04:10.412 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:10.412 SO libspdk_event.so.14.0 00:04:10.412 SYMLINK libspdk_event.so 00:04:10.991 LIB libspdk_fuse_dispatcher.a 00:04:10.991 SO libspdk_fuse_dispatcher.so.1.0 00:04:10.991 SYMLINK libspdk_fuse_dispatcher.so 00:04:11.924 LIB libspdk_blob.a 00:04:11.924 SO libspdk_blob.so.12.0 00:04:11.924 SYMLINK libspdk_blob.so 00:04:12.182 CC lib/lvol/lvol.o 00:04:12.182 CC lib/blobfs/blobfs.o 00:04:12.182 CC lib/blobfs/tree.o 00:04:13.115 LIB libspdk_bdev.a 00:04:13.115 LIB libspdk_blobfs.a 00:04:13.115 SO libspdk_bdev.so.17.0 00:04:13.115 SO libspdk_blobfs.so.11.0 00:04:13.115 LIB libspdk_lvol.a 00:04:13.115 SYMLINK libspdk_blobfs.so 00:04:13.115 SO libspdk_lvol.so.11.0 00:04:13.115 SYMLINK libspdk_bdev.so 00:04:13.373 SYMLINK libspdk_lvol.so 00:04:13.373 CC lib/nvmf/ctrlr.o 00:04:13.373 CC lib/nvmf/ctrlr_discovery.o 00:04:13.373 CC lib/nvmf/ctrlr_bdev.o 00:04:13.373 CC lib/nvmf/subsystem.o 00:04:13.373 CC lib/nvmf/nvmf.o 00:04:13.373 CC lib/nvmf/nvmf_rpc.o 00:04:13.373 CC lib/nbd/nbd.o 00:04:13.373 CC lib/ublk/ublk.o 00:04:13.373 CC lib/ftl/ftl_core.o 00:04:13.373 CC lib/scsi/dev.o 00:04:13.631 CC lib/scsi/lun.o 00:04:13.631 CC lib/ftl/ftl_init.o 00:04:13.631 CC lib/nbd/nbd_rpc.o 00:04:13.631 CC lib/scsi/port.o 00:04:13.631 CC lib/ftl/ftl_layout.o 00:04:13.889 CC lib/ftl/ftl_debug.o 00:04:13.889 CC lib/ublk/ublk_rpc.o 00:04:13.889 CC lib/scsi/scsi.o 00:04:13.889 LIB libspdk_nbd.a 00:04:13.889 SO libspdk_nbd.so.7.0 00:04:13.889 LIB libspdk_ublk.a 00:04:13.889 CC lib/scsi/scsi_bdev.o 00:04:13.889 SYMLINK libspdk_nbd.so 00:04:13.889 CC lib/nvmf/transport.o 00:04:13.889 SO libspdk_ublk.so.3.0 00:04:13.889 CC lib/scsi/scsi_pr.o 00:04:13.889 CC lib/ftl/ftl_io.o 00:04:13.889 CC lib/scsi/scsi_rpc.o 00:04:14.147 SYMLINK libspdk_ublk.so 00:04:14.147 CC lib/scsi/task.o 
00:04:14.147 CC lib/ftl/ftl_sb.o 00:04:14.147 CC lib/nvmf/tcp.o 00:04:14.147 CC lib/ftl/ftl_l2p.o 00:04:14.147 CC lib/nvmf/stubs.o 00:04:14.147 CC lib/nvmf/mdns_server.o 00:04:14.147 CC lib/nvmf/rdma.o 00:04:14.404 CC lib/nvmf/auth.o 00:04:14.404 CC lib/ftl/ftl_l2p_flat.o 00:04:14.404 LIB libspdk_scsi.a 00:04:14.404 SO libspdk_scsi.so.9.0 00:04:14.404 SYMLINK libspdk_scsi.so 00:04:14.404 CC lib/ftl/ftl_nv_cache.o 00:04:14.404 CC lib/ftl/ftl_band.o 00:04:14.662 CC lib/ftl/ftl_band_ops.o 00:04:14.662 CC lib/ftl/ftl_writer.o 00:04:14.662 CC lib/iscsi/conn.o 00:04:14.662 CC lib/vhost/vhost.o 00:04:14.920 CC lib/ftl/ftl_rq.o 00:04:14.920 CC lib/ftl/ftl_reloc.o 00:04:14.920 CC lib/ftl/ftl_l2p_cache.o 00:04:14.920 CC lib/iscsi/init_grp.o 00:04:14.920 CC lib/ftl/ftl_p2l.o 00:04:15.180 CC lib/ftl/ftl_p2l_log.o 00:04:15.180 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.180 CC lib/vhost/vhost_rpc.o 00:04:15.180 CC lib/iscsi/iscsi.o 00:04:15.180 CC lib/iscsi/param.o 00:04:15.439 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:15.439 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:15.439 CC lib/iscsi/portal_grp.o 00:04:15.439 CC lib/iscsi/tgt_node.o 00:04:15.439 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.439 CC lib/iscsi/iscsi_subsystem.o 00:04:15.439 CC lib/iscsi/iscsi_rpc.o 00:04:15.697 CC lib/iscsi/task.o 00:04:15.697 CC lib/vhost/vhost_scsi.o 00:04:15.697 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:15.697 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:15.697 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:15.697 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:15.956 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:15.956 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:15.956 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:15.956 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:15.956 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.956 CC lib/vhost/vhost_blk.o 00:04:15.956 CC lib/vhost/rte_vhost_user.o 00:04:15.956 LIB libspdk_nvmf.a 00:04:15.956 CC lib/ftl/utils/ftl_conf.o 00:04:15.956 CC lib/ftl/utils/ftl_md.o 00:04:15.956 CC lib/ftl/utils/ftl_mempool.o 00:04:15.956 SO 
libspdk_nvmf.so.20.0 00:04:16.214 CC lib/ftl/utils/ftl_bitmap.o 00:04:16.214 CC lib/ftl/utils/ftl_property.o 00:04:16.214 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:16.214 SYMLINK libspdk_nvmf.so 00:04:16.214 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:16.214 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:16.214 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:16.214 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:16.214 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:16.474 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:16.474 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:16.474 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:16.474 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:16.474 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:16.474 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:16.474 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:16.474 CC lib/ftl/base/ftl_base_dev.o 00:04:16.474 LIB libspdk_iscsi.a 00:04:16.474 CC lib/ftl/base/ftl_base_bdev.o 00:04:16.740 CC lib/ftl/ftl_trace.o 00:04:16.740 SO libspdk_iscsi.so.8.0 00:04:16.740 SYMLINK libspdk_iscsi.so 00:04:16.740 LIB libspdk_ftl.a 00:04:17.025 LIB libspdk_vhost.a 00:04:17.025 SO libspdk_ftl.so.9.0 00:04:17.025 SO libspdk_vhost.so.8.0 00:04:17.025 SYMLINK libspdk_vhost.so 00:04:17.283 SYMLINK libspdk_ftl.so 00:04:17.541 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.541 CC module/keyring/linux/keyring.o 00:04:17.541 CC module/blob/bdev/blob_bdev.o 00:04:17.541 CC module/keyring/file/keyring.o 00:04:17.541 CC module/accel/error/accel_error.o 00:04:17.541 CC module/sock/posix/posix.o 00:04:17.541 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.541 CC module/accel/ioat/accel_ioat.o 00:04:17.541 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:17.541 CC module/fsdev/aio/fsdev_aio.o 00:04:17.541 LIB libspdk_env_dpdk_rpc.a 00:04:17.541 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.541 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.541 CC module/keyring/linux/keyring_rpc.o 00:04:17.541 CC module/keyring/file/keyring_rpc.o 00:04:17.541 LIB 
libspdk_scheduler_dpdk_governor.a 00:04:17.799 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:17.799 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.799 LIB libspdk_keyring_linux.a 00:04:17.799 LIB libspdk_scheduler_dynamic.a 00:04:17.799 CC module/accel/error/accel_error_rpc.o 00:04:17.799 LIB libspdk_blob_bdev.a 00:04:17.799 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.799 SO libspdk_keyring_linux.so.1.0 00:04:17.799 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:17.799 SO libspdk_blob_bdev.so.12.0 00:04:17.799 LIB libspdk_keyring_file.a 00:04:17.799 CC module/accel/dsa/accel_dsa.o 00:04:17.799 SO libspdk_keyring_file.so.2.0 00:04:17.799 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.799 SYMLINK libspdk_keyring_linux.so 00:04:17.799 SYMLINK libspdk_blob_bdev.so 00:04:17.799 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.799 LIB libspdk_accel_ioat.a 00:04:17.799 SYMLINK libspdk_keyring_file.so 00:04:17.799 LIB libspdk_accel_error.a 00:04:17.799 SO libspdk_accel_ioat.so.6.0 00:04:17.799 SO libspdk_accel_error.so.2.0 00:04:17.799 SYMLINK libspdk_accel_error.so 00:04:17.799 SYMLINK libspdk_accel_ioat.so 00:04:17.799 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:17.799 CC module/fsdev/aio/linux_aio_mgr.o 00:04:17.799 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.056 CC module/accel/iaa/accel_iaa.o 00:04:18.056 CC module/accel/iaa/accel_iaa_rpc.o 00:04:18.056 LIB libspdk_accel_dsa.a 00:04:18.056 LIB libspdk_scheduler_gscheduler.a 00:04:18.056 CC module/bdev/delay/vbdev_delay.o 00:04:18.056 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.056 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.056 SO libspdk_accel_dsa.so.5.0 00:04:18.056 CC module/bdev/error/vbdev_error.o 00:04:18.056 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.056 SYMLINK libspdk_accel_dsa.so 00:04:18.056 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.056 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.056 LIB libspdk_accel_iaa.a 00:04:18.056 SO libspdk_accel_iaa.so.3.0 00:04:18.313 LIB 
libspdk_fsdev_aio.a 00:04:18.314 CC module/bdev/gpt/gpt.o 00:04:18.314 SO libspdk_fsdev_aio.so.1.0 00:04:18.314 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.314 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.314 SYMLINK libspdk_accel_iaa.so 00:04:18.314 LIB libspdk_sock_posix.a 00:04:18.314 LIB libspdk_blobfs_bdev.a 00:04:18.314 LIB libspdk_bdev_error.a 00:04:18.314 SO libspdk_blobfs_bdev.so.6.0 00:04:18.314 SO libspdk_sock_posix.so.6.0 00:04:18.314 SYMLINK libspdk_fsdev_aio.so 00:04:18.314 SO libspdk_bdev_error.so.6.0 00:04:18.314 CC module/bdev/malloc/bdev_malloc.o 00:04:18.314 SYMLINK libspdk_blobfs_bdev.so 00:04:18.314 SYMLINK libspdk_sock_posix.so 00:04:18.314 SYMLINK libspdk_bdev_error.so 00:04:18.314 CC module/bdev/null/bdev_null.o 00:04:18.314 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.314 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.571 CC module/bdev/nvme/bdev_nvme.o 00:04:18.571 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.571 CC module/bdev/raid/bdev_raid.o 00:04:18.571 CC module/bdev/split/vbdev_split.o 00:04:18.571 LIB libspdk_bdev_gpt.a 00:04:18.571 SO libspdk_bdev_gpt.so.6.0 00:04:18.571 LIB libspdk_bdev_delay.a 00:04:18.571 CC module/bdev/raid/bdev_raid_rpc.o 00:04:18.571 SO libspdk_bdev_delay.so.6.0 00:04:18.571 SYMLINK libspdk_bdev_gpt.so 00:04:18.571 CC module/bdev/raid/bdev_raid_sb.o 00:04:18.571 SYMLINK libspdk_bdev_delay.so 00:04:18.571 CC module/bdev/raid/raid0.o 00:04:18.571 LIB libspdk_bdev_malloc.a 00:04:18.571 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.571 SO libspdk_bdev_malloc.so.6.0 00:04:18.571 CC module/bdev/null/bdev_null_rpc.o 00:04:18.571 SYMLINK libspdk_bdev_malloc.so 00:04:18.571 CC module/bdev/raid/raid1.o 00:04:18.571 CC module/bdev/split/vbdev_split_rpc.o 00:04:18.828 LIB libspdk_bdev_null.a 00:04:18.828 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.828 SO libspdk_bdev_null.so.6.0 00:04:18.828 LIB libspdk_bdev_split.a 00:04:18.828 SYMLINK libspdk_bdev_null.so 00:04:18.828 CC module/bdev/nvme/bdev_nvme_rpc.o 
00:04:18.828 CC module/bdev/nvme/nvme_rpc.o 00:04:18.828 SO libspdk_bdev_split.so.6.0 00:04:18.828 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.828 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:18.828 LIB libspdk_bdev_passthru.a 00:04:18.828 LIB libspdk_bdev_lvol.a 00:04:18.828 SO libspdk_bdev_passthru.so.6.0 00:04:18.828 SYMLINK libspdk_bdev_split.so 00:04:18.828 SO libspdk_bdev_lvol.so.6.0 00:04:19.088 SYMLINK libspdk_bdev_passthru.so 00:04:19.088 SYMLINK libspdk_bdev_lvol.so 00:04:19.088 CC module/bdev/raid/concat.o 00:04:19.088 CC module/bdev/xnvme/bdev_xnvme.o 00:04:19.088 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.088 LIB libspdk_bdev_zone_block.a 00:04:19.088 CC module/bdev/aio/bdev_aio.o 00:04:19.088 CC module/bdev/ftl/bdev_ftl.o 00:04:19.088 SO libspdk_bdev_zone_block.so.6.0 00:04:19.088 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:19.088 CC module/bdev/iscsi/bdev_iscsi.o 00:04:19.088 SYMLINK libspdk_bdev_zone_block.so 00:04:19.088 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:19.088 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.346 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:19.346 CC module/bdev/nvme/vbdev_opal.o 00:04:19.346 LIB libspdk_bdev_xnvme.a 00:04:19.346 SO libspdk_bdev_xnvme.so.3.0 00:04:19.346 LIB libspdk_bdev_raid.a 00:04:19.346 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:19.346 SO libspdk_bdev_raid.so.6.0 00:04:19.346 LIB libspdk_bdev_ftl.a 00:04:19.346 SYMLINK libspdk_bdev_xnvme.so 00:04:19.346 SO libspdk_bdev_ftl.so.6.0 00:04:19.346 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:19.346 LIB libspdk_bdev_aio.a 00:04:19.346 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:19.346 SYMLINK libspdk_bdev_raid.so 00:04:19.346 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.346 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:19.603 LIB libspdk_bdev_iscsi.a 00:04:19.603 SO libspdk_bdev_aio.so.6.0 00:04:19.603 SYMLINK libspdk_bdev_ftl.so 00:04:19.603 SO libspdk_bdev_iscsi.so.6.0 00:04:19.603 SYMLINK libspdk_bdev_aio.so 00:04:19.603 SYMLINK 
libspdk_bdev_iscsi.so 00:04:19.860 LIB libspdk_bdev_virtio.a 00:04:19.860 SO libspdk_bdev_virtio.so.6.0 00:04:19.860 SYMLINK libspdk_bdev_virtio.so 00:04:20.795 LIB libspdk_bdev_nvme.a 00:04:21.053 SO libspdk_bdev_nvme.so.7.1 00:04:21.053 SYMLINK libspdk_bdev_nvme.so 00:04:21.621 CC module/event/subsystems/vmd/vmd.o 00:04:21.621 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.621 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.621 CC module/event/subsystems/fsdev/fsdev.o 00:04:21.621 CC module/event/subsystems/keyring/keyring.o 00:04:21.621 CC module/event/subsystems/sock/sock.o 00:04:21.621 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.621 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.621 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.621 LIB libspdk_event_keyring.a 00:04:21.621 SO libspdk_event_keyring.so.1.0 00:04:21.621 LIB libspdk_event_scheduler.a 00:04:21.621 LIB libspdk_event_vmd.a 00:04:21.621 LIB libspdk_event_sock.a 00:04:21.621 LIB libspdk_event_fsdev.a 00:04:21.621 SO libspdk_event_scheduler.so.4.0 00:04:21.621 SO libspdk_event_fsdev.so.1.0 00:04:21.621 SO libspdk_event_sock.so.5.0 00:04:21.621 LIB libspdk_event_vhost_blk.a 00:04:21.621 LIB libspdk_event_iobuf.a 00:04:21.621 SO libspdk_event_vmd.so.6.0 00:04:21.621 SYMLINK libspdk_event_keyring.so 00:04:21.621 SO libspdk_event_vhost_blk.so.3.0 00:04:21.621 SO libspdk_event_iobuf.so.3.0 00:04:21.621 SYMLINK libspdk_event_fsdev.so 00:04:21.621 SYMLINK libspdk_event_scheduler.so 00:04:21.621 SYMLINK libspdk_event_sock.so 00:04:21.621 SYMLINK libspdk_event_vmd.so 00:04:21.621 SYMLINK libspdk_event_iobuf.so 00:04:21.621 SYMLINK libspdk_event_vhost_blk.so 00:04:21.882 CC module/event/subsystems/accel/accel.o 00:04:22.142 LIB libspdk_event_accel.a 00:04:22.142 SO libspdk_event_accel.so.6.0 00:04:22.142 SYMLINK libspdk_event_accel.so 00:04:22.400 CC module/event/subsystems/bdev/bdev.o 00:04:22.658 LIB libspdk_event_bdev.a 00:04:22.658 SO libspdk_event_bdev.so.6.0 00:04:22.658 
SYMLINK libspdk_event_bdev.so 00:04:22.916 CC module/event/subsystems/nbd/nbd.o 00:04:22.916 CC module/event/subsystems/ublk/ublk.o 00:04:22.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:22.916 CC module/event/subsystems/scsi/scsi.o 00:04:22.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:22.916 LIB libspdk_event_nbd.a 00:04:22.916 LIB libspdk_event_ublk.a 00:04:22.916 SO libspdk_event_nbd.so.6.0 00:04:22.916 LIB libspdk_event_scsi.a 00:04:22.916 SO libspdk_event_ublk.so.3.0 00:04:22.916 SO libspdk_event_scsi.so.6.0 00:04:22.916 SYMLINK libspdk_event_nbd.so 00:04:22.916 SYMLINK libspdk_event_ublk.so 00:04:22.916 LIB libspdk_event_nvmf.a 00:04:22.916 SYMLINK libspdk_event_scsi.so 00:04:22.916 SO libspdk_event_nvmf.so.6.0 00:04:23.174 SYMLINK libspdk_event_nvmf.so 00:04:23.174 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:23.174 CC module/event/subsystems/iscsi/iscsi.o 00:04:23.174 LIB libspdk_event_vhost_scsi.a 00:04:23.431 SO libspdk_event_vhost_scsi.so.3.0 00:04:23.431 LIB libspdk_event_iscsi.a 00:04:23.431 SO libspdk_event_iscsi.so.6.0 00:04:23.431 SYMLINK libspdk_event_vhost_scsi.so 00:04:23.431 SYMLINK libspdk_event_iscsi.so 00:04:23.431 SO libspdk.so.6.0 00:04:23.431 SYMLINK libspdk.so 00:04:23.689 CXX app/trace/trace.o 00:04:23.689 CC app/trace_record/trace_record.o 00:04:23.689 CC app/spdk_lspci/spdk_lspci.o 00:04:23.689 CC app/spdk_tgt/spdk_tgt.o 00:04:23.689 CC app/nvmf_tgt/nvmf_main.o 00:04:23.689 CC app/iscsi_tgt/iscsi_tgt.o 00:04:23.689 CC examples/util/zipf/zipf.o 00:04:23.689 CC test/thread/poller_perf/poller_perf.o 00:04:23.947 CC test/dma/test_dma/test_dma.o 00:04:23.947 CC test/app/bdev_svc/bdev_svc.o 00:04:23.947 LINK spdk_lspci 00:04:23.947 LINK spdk_tgt 00:04:23.947 LINK spdk_trace_record 00:04:23.947 LINK iscsi_tgt 00:04:23.947 LINK zipf 00:04:23.947 LINK poller_perf 00:04:23.947 LINK nvmf_tgt 00:04:23.947 LINK bdev_svc 00:04:23.947 LINK spdk_trace 00:04:24.204 CC app/spdk_nvme_perf/perf.o 00:04:24.204 CC 
app/spdk_nvme_identify/identify.o 00:04:24.204 CC app/spdk_nvme_discover/discovery_aer.o 00:04:24.204 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:24.204 CC examples/ioat/perf/perf.o 00:04:24.204 CC examples/vmd/lsvmd/lsvmd.o 00:04:24.204 CC examples/idxd/perf/perf.o 00:04:24.204 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:24.204 CC examples/vmd/led/led.o 00:04:24.204 LINK spdk_nvme_discover 00:04:24.204 LINK lsvmd 00:04:24.204 LINK test_dma 00:04:24.463 LINK ioat_perf 00:04:24.463 LINK led 00:04:24.463 LINK idxd_perf 00:04:24.463 LINK nvme_fuzz 00:04:24.463 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:24.463 CC examples/ioat/verify/verify.o 00:04:24.727 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:24.727 CC examples/thread/thread/thread_ex.o 00:04:24.727 CC examples/sock/hello_world/hello_sock.o 00:04:24.727 CC test/app/histogram_perf/histogram_perf.o 00:04:24.727 LINK interrupt_tgt 00:04:24.727 TEST_HEADER include/spdk/accel.h 00:04:24.727 TEST_HEADER include/spdk/accel_module.h 00:04:24.727 TEST_HEADER include/spdk/assert.h 00:04:24.727 TEST_HEADER include/spdk/barrier.h 00:04:24.727 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:24.727 TEST_HEADER include/spdk/base64.h 00:04:24.727 TEST_HEADER include/spdk/bdev.h 00:04:24.727 TEST_HEADER include/spdk/bdev_module.h 00:04:24.727 TEST_HEADER include/spdk/bdev_zone.h 00:04:24.727 TEST_HEADER include/spdk/bit_array.h 00:04:24.727 TEST_HEADER include/spdk/bit_pool.h 00:04:24.727 TEST_HEADER include/spdk/blob_bdev.h 00:04:24.727 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:24.727 TEST_HEADER include/spdk/blobfs.h 00:04:24.727 LINK verify 00:04:24.727 TEST_HEADER include/spdk/blob.h 00:04:24.727 TEST_HEADER include/spdk/conf.h 00:04:24.727 TEST_HEADER include/spdk/config.h 00:04:24.727 TEST_HEADER include/spdk/cpuset.h 00:04:24.727 TEST_HEADER include/spdk/crc16.h 00:04:24.727 TEST_HEADER include/spdk/crc32.h 00:04:24.727 TEST_HEADER include/spdk/crc64.h 00:04:24.727 TEST_HEADER include/spdk/dif.h 
00:04:24.727 TEST_HEADER include/spdk/dma.h 00:04:24.727 TEST_HEADER include/spdk/endian.h 00:04:24.727 TEST_HEADER include/spdk/env_dpdk.h 00:04:24.727 TEST_HEADER include/spdk/env.h 00:04:24.727 TEST_HEADER include/spdk/event.h 00:04:24.727 TEST_HEADER include/spdk/fd_group.h 00:04:24.727 TEST_HEADER include/spdk/fd.h 00:04:24.727 TEST_HEADER include/spdk/file.h 00:04:24.727 TEST_HEADER include/spdk/fsdev.h 00:04:24.727 TEST_HEADER include/spdk/fsdev_module.h 00:04:24.727 TEST_HEADER include/spdk/ftl.h 00:04:24.727 TEST_HEADER include/spdk/gpt_spec.h 00:04:24.727 TEST_HEADER include/spdk/hexlify.h 00:04:24.727 TEST_HEADER include/spdk/histogram_data.h 00:04:24.727 TEST_HEADER include/spdk/idxd.h 00:04:24.727 TEST_HEADER include/spdk/idxd_spec.h 00:04:24.727 TEST_HEADER include/spdk/init.h 00:04:24.727 TEST_HEADER include/spdk/ioat.h 00:04:24.727 TEST_HEADER include/spdk/ioat_spec.h 00:04:24.727 TEST_HEADER include/spdk/iscsi_spec.h 00:04:24.727 TEST_HEADER include/spdk/json.h 00:04:24.727 TEST_HEADER include/spdk/jsonrpc.h 00:04:24.727 TEST_HEADER include/spdk/keyring.h 00:04:24.727 TEST_HEADER include/spdk/keyring_module.h 00:04:24.727 TEST_HEADER include/spdk/likely.h 00:04:24.727 TEST_HEADER include/spdk/log.h 00:04:24.727 TEST_HEADER include/spdk/lvol.h 00:04:24.727 TEST_HEADER include/spdk/md5.h 00:04:24.727 TEST_HEADER include/spdk/memory.h 00:04:24.727 TEST_HEADER include/spdk/mmio.h 00:04:24.727 TEST_HEADER include/spdk/nbd.h 00:04:24.727 TEST_HEADER include/spdk/net.h 00:04:24.727 TEST_HEADER include/spdk/notify.h 00:04:24.727 TEST_HEADER include/spdk/nvme.h 00:04:24.727 TEST_HEADER include/spdk/nvme_intel.h 00:04:24.727 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:24.727 LINK histogram_perf 00:04:24.727 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:24.727 TEST_HEADER include/spdk/nvme_spec.h 00:04:24.727 TEST_HEADER include/spdk/nvme_zns.h 00:04:24.727 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:24.727 TEST_HEADER include/spdk/nvmf_fc_spec.h 
00:04:24.727 TEST_HEADER include/spdk/nvmf.h 00:04:24.727 TEST_HEADER include/spdk/nvmf_spec.h 00:04:24.727 TEST_HEADER include/spdk/nvmf_transport.h 00:04:24.727 TEST_HEADER include/spdk/opal.h 00:04:24.727 TEST_HEADER include/spdk/opal_spec.h 00:04:24.727 TEST_HEADER include/spdk/pci_ids.h 00:04:24.727 TEST_HEADER include/spdk/pipe.h 00:04:24.727 TEST_HEADER include/spdk/queue.h 00:04:24.727 TEST_HEADER include/spdk/reduce.h 00:04:24.727 TEST_HEADER include/spdk/rpc.h 00:04:24.727 TEST_HEADER include/spdk/scheduler.h 00:04:24.727 TEST_HEADER include/spdk/scsi.h 00:04:24.727 TEST_HEADER include/spdk/scsi_spec.h 00:04:24.727 TEST_HEADER include/spdk/sock.h 00:04:24.727 TEST_HEADER include/spdk/stdinc.h 00:04:24.727 LINK thread 00:04:24.727 TEST_HEADER include/spdk/string.h 00:04:24.727 TEST_HEADER include/spdk/thread.h 00:04:24.727 TEST_HEADER include/spdk/trace.h 00:04:24.727 TEST_HEADER include/spdk/trace_parser.h 00:04:24.727 TEST_HEADER include/spdk/tree.h 00:04:24.727 TEST_HEADER include/spdk/ublk.h 00:04:24.727 TEST_HEADER include/spdk/util.h 00:04:24.727 TEST_HEADER include/spdk/uuid.h 00:04:24.727 TEST_HEADER include/spdk/version.h 00:04:24.727 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:24.727 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:24.727 TEST_HEADER include/spdk/vhost.h 00:04:24.727 TEST_HEADER include/spdk/vmd.h 00:04:24.727 TEST_HEADER include/spdk/xor.h 00:04:24.727 TEST_HEADER include/spdk/zipf.h 00:04:24.727 CXX test/cpp_headers/accel.o 00:04:25.003 CC app/spdk_top/spdk_top.o 00:04:25.003 LINK hello_sock 00:04:25.003 LINK spdk_nvme_identify 00:04:25.003 CC app/vhost/vhost.o 00:04:25.003 CC test/app/jsoncat/jsoncat.o 00:04:25.003 LINK spdk_nvme_perf 00:04:25.003 CXX test/cpp_headers/accel_module.o 00:04:25.003 CC test/app/stub/stub.o 00:04:25.003 CXX test/cpp_headers/assert.o 00:04:25.003 LINK jsoncat 00:04:25.003 LINK vhost_fuzz 00:04:25.003 LINK vhost 00:04:25.003 LINK stub 00:04:25.003 CC examples/accel/perf/accel_perf.o 00:04:25.264 
CXX test/cpp_headers/barrier.o 00:04:25.264 CC app/spdk_dd/spdk_dd.o 00:04:25.264 CXX test/cpp_headers/base64.o 00:04:25.264 CXX test/cpp_headers/bdev.o 00:04:25.264 CC examples/blob/hello_world/hello_blob.o 00:04:25.264 CC examples/nvme/hello_world/hello_world.o 00:04:25.524 CXX test/cpp_headers/bdev_module.o 00:04:25.524 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:25.524 CC test/event/event_perf/event_perf.o 00:04:25.524 LINK hello_blob 00:04:25.524 LINK accel_perf 00:04:25.524 LINK spdk_dd 00:04:25.524 CC test/env/mem_callbacks/mem_callbacks.o 00:04:25.524 CXX test/cpp_headers/bdev_zone.o 00:04:25.524 LINK event_perf 00:04:25.524 LINK hello_world 00:04:25.785 CC test/env/vtophys/vtophys.o 00:04:25.785 LINK hello_fsdev 00:04:25.785 LINK mem_callbacks 00:04:25.785 CC examples/blob/cli/blobcli.o 00:04:25.785 CXX test/cpp_headers/bit_array.o 00:04:25.785 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:25.785 CC test/event/reactor/reactor.o 00:04:25.785 LINK spdk_top 00:04:25.785 CC examples/nvme/reconnect/reconnect.o 00:04:25.785 LINK vtophys 00:04:25.785 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.785 LINK env_dpdk_post_init 00:04:25.785 CXX test/cpp_headers/bit_pool.o 00:04:25.785 LINK reactor 00:04:26.045 CC examples/nvme/arbitration/arbitration.o 00:04:26.045 LINK iscsi_fuzz 00:04:26.045 CC examples/nvme/hotplug/hotplug.o 00:04:26.045 CXX test/cpp_headers/blob_bdev.o 00:04:26.045 CC test/env/memory/memory_ut.o 00:04:26.045 CC app/fio/nvme/fio_plugin.o 00:04:26.045 CC test/event/reactor_perf/reactor_perf.o 00:04:26.045 LINK reconnect 00:04:26.045 CXX test/cpp_headers/blobfs_bdev.o 00:04:26.306 LINK blobcli 00:04:26.306 LINK reactor_perf 00:04:26.306 LINK hotplug 00:04:26.306 CXX test/cpp_headers/blobfs.o 00:04:26.306 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:26.306 LINK nvme_manage 00:04:26.306 LINK arbitration 00:04:26.306 CXX test/cpp_headers/blob.o 00:04:26.306 CXX test/cpp_headers/conf.o 00:04:26.306 CC examples/nvme/abort/abort.o 
00:04:26.306 LINK cmb_copy 00:04:26.306 CC test/event/app_repeat/app_repeat.o 00:04:26.306 CC test/env/pci/pci_ut.o 00:04:26.566 CXX test/cpp_headers/config.o 00:04:26.566 CC app/fio/bdev/fio_plugin.o 00:04:26.566 CXX test/cpp_headers/cpuset.o 00:04:26.566 LINK app_repeat 00:04:26.566 CC examples/bdev/hello_world/hello_bdev.o 00:04:26.566 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.566 CC examples/bdev/bdevperf/bdevperf.o 00:04:26.566 LINK spdk_nvme 00:04:26.566 CXX test/cpp_headers/crc16.o 00:04:26.566 CXX test/cpp_headers/crc32.o 00:04:26.566 LINK pmr_persistence 00:04:26.825 LINK abort 00:04:26.826 LINK hello_bdev 00:04:26.826 CC test/event/scheduler/scheduler.o 00:04:26.826 CXX test/cpp_headers/crc64.o 00:04:26.826 LINK memory_ut 00:04:26.826 LINK pci_ut 00:04:26.826 CC test/rpc_client/rpc_client_test.o 00:04:26.826 CC test/nvme/aer/aer.o 00:04:26.826 CC test/nvme/reset/reset.o 00:04:26.826 CXX test/cpp_headers/dif.o 00:04:26.826 CXX test/cpp_headers/dma.o 00:04:26.826 LINK spdk_bdev 00:04:27.084 CC test/nvme/sgl/sgl.o 00:04:27.084 LINK scheduler 00:04:27.084 CXX test/cpp_headers/endian.o 00:04:27.084 CXX test/cpp_headers/env_dpdk.o 00:04:27.084 LINK rpc_client_test 00:04:27.084 LINK aer 00:04:27.084 CC test/nvme/e2edp/nvme_dp.o 00:04:27.084 CXX test/cpp_headers/env.o 00:04:27.084 CXX test/cpp_headers/event.o 00:04:27.084 LINK bdevperf 00:04:27.084 LINK reset 00:04:27.342 LINK sgl 00:04:27.342 CC test/accel/dif/dif.o 00:04:27.342 CXX test/cpp_headers/fd_group.o 00:04:27.342 CC test/nvme/overhead/overhead.o 00:04:27.342 CC test/nvme/err_injection/err_injection.o 00:04:27.342 CC test/blobfs/mkfs/mkfs.o 00:04:27.342 CC test/nvme/startup/startup.o 00:04:27.342 CC test/nvme/reserve/reserve.o 00:04:27.342 CXX test/cpp_headers/fd.o 00:04:27.342 CC test/nvme/simple_copy/simple_copy.o 00:04:27.342 LINK nvme_dp 00:04:27.600 LINK err_injection 00:04:27.600 LINK mkfs 00:04:27.600 LINK startup 00:04:27.600 LINK overhead 00:04:27.600 CC 
examples/nvmf/nvmf/nvmf.o 00:04:27.600 CXX test/cpp_headers/file.o 00:04:27.600 LINK reserve 00:04:27.600 LINK simple_copy 00:04:27.600 CXX test/cpp_headers/fsdev.o 00:04:27.600 CC test/nvme/connect_stress/connect_stress.o 00:04:27.600 CC test/nvme/compliance/nvme_compliance.o 00:04:27.600 CC test/nvme/boot_partition/boot_partition.o 00:04:27.858 LINK nvmf 00:04:27.858 CC test/nvme/fused_ordering/fused_ordering.o 00:04:27.858 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:27.858 CXX test/cpp_headers/fsdev_module.o 00:04:27.858 CXX test/cpp_headers/ftl.o 00:04:27.858 LINK connect_stress 00:04:27.858 LINK boot_partition 00:04:27.858 CC test/lvol/esnap/esnap.o 00:04:27.858 LINK doorbell_aers 00:04:27.858 CXX test/cpp_headers/gpt_spec.o 00:04:27.858 LINK fused_ordering 00:04:27.858 CXX test/cpp_headers/hexlify.o 00:04:27.858 LINK dif 00:04:27.858 CC test/nvme/fdp/fdp.o 00:04:27.858 CXX test/cpp_headers/histogram_data.o 00:04:28.115 CXX test/cpp_headers/idxd.o 00:04:28.115 CXX test/cpp_headers/idxd_spec.o 00:04:28.115 LINK nvme_compliance 00:04:28.115 CXX test/cpp_headers/init.o 00:04:28.115 CC test/nvme/cuse/cuse.o 00:04:28.115 CXX test/cpp_headers/ioat.o 00:04:28.115 CXX test/cpp_headers/ioat_spec.o 00:04:28.115 CXX test/cpp_headers/iscsi_spec.o 00:04:28.115 CXX test/cpp_headers/json.o 00:04:28.115 CXX test/cpp_headers/jsonrpc.o 00:04:28.115 CXX test/cpp_headers/keyring.o 00:04:28.115 CXX test/cpp_headers/keyring_module.o 00:04:28.115 CXX test/cpp_headers/likely.o 00:04:28.116 CXX test/cpp_headers/log.o 00:04:28.116 CXX test/cpp_headers/lvol.o 00:04:28.116 CXX test/cpp_headers/md5.o 00:04:28.116 LINK fdp 00:04:28.116 CXX test/cpp_headers/memory.o 00:04:28.373 CXX test/cpp_headers/mmio.o 00:04:28.373 CXX test/cpp_headers/nbd.o 00:04:28.373 CXX test/cpp_headers/net.o 00:04:28.373 CC test/bdev/bdevio/bdevio.o 00:04:28.373 CXX test/cpp_headers/notify.o 00:04:28.373 CXX test/cpp_headers/nvme.o 00:04:28.373 CXX test/cpp_headers/nvme_intel.o 00:04:28.373 CXX 
test/cpp_headers/nvme_ocssd.o 00:04:28.373 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:28.373 CXX test/cpp_headers/nvme_spec.o 00:04:28.373 CXX test/cpp_headers/nvme_zns.o 00:04:28.373 CXX test/cpp_headers/nvmf_cmd.o 00:04:28.373 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:28.373 CXX test/cpp_headers/nvmf.o 00:04:28.374 CXX test/cpp_headers/nvmf_spec.o 00:04:28.632 CXX test/cpp_headers/nvmf_transport.o 00:04:28.632 CXX test/cpp_headers/opal.o 00:04:28.632 CXX test/cpp_headers/opal_spec.o 00:04:28.632 CXX test/cpp_headers/pci_ids.o 00:04:28.632 CXX test/cpp_headers/pipe.o 00:04:28.632 CXX test/cpp_headers/queue.o 00:04:28.632 CXX test/cpp_headers/reduce.o 00:04:28.632 CXX test/cpp_headers/rpc.o 00:04:28.632 CXX test/cpp_headers/scheduler.o 00:04:28.632 CXX test/cpp_headers/scsi.o 00:04:28.632 CXX test/cpp_headers/scsi_spec.o 00:04:28.632 CXX test/cpp_headers/sock.o 00:04:28.632 CXX test/cpp_headers/stdinc.o 00:04:28.632 LINK bdevio 00:04:28.632 CXX test/cpp_headers/string.o 00:04:28.632 CXX test/cpp_headers/thread.o 00:04:28.890 CXX test/cpp_headers/trace.o 00:04:28.890 CXX test/cpp_headers/trace_parser.o 00:04:28.890 CXX test/cpp_headers/tree.o 00:04:28.890 CXX test/cpp_headers/ublk.o 00:04:28.890 CXX test/cpp_headers/util.o 00:04:28.890 CXX test/cpp_headers/uuid.o 00:04:28.890 CXX test/cpp_headers/version.o 00:04:28.890 CXX test/cpp_headers/vfio_user_pci.o 00:04:28.890 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.890 CXX test/cpp_headers/vhost.o 00:04:28.890 CXX test/cpp_headers/vmd.o 00:04:28.890 CXX test/cpp_headers/xor.o 00:04:28.890 CXX test/cpp_headers/zipf.o 00:04:29.148 LINK cuse 00:04:32.441 LINK esnap 00:04:32.703 00:04:32.703 real 1m2.744s 00:04:32.703 user 5m2.632s 00:04:32.703 sys 0m49.998s 00:04:32.703 01:04:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:32.703 01:04:06 make -- common/autotest_common.sh@10 -- $ set +x 00:04:32.703 ************************************ 00:04:32.703 END TEST make 00:04:32.703 
************************************ 00:04:32.703 01:04:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:32.703 01:04:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:32.703 01:04:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:32.703 01:04:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.703 01:04:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:32.703 01:04:06 -- pm/common@44 -- $ pid=5787 00:04:32.703 01:04:06 -- pm/common@50 -- $ kill -TERM 5787 00:04:32.703 01:04:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.703 01:04:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:32.703 01:04:06 -- pm/common@44 -- $ pid=5788 00:04:32.703 01:04:06 -- pm/common@50 -- $ kill -TERM 5788 00:04:32.703 01:04:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:32.703 01:04:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:32.703 01:04:06 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.703 01:04:06 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.703 01:04:06 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.703 01:04:06 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.703 01:04:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.703 01:04:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.703 01:04:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.703 01:04:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.703 01:04:06 -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.703 01:04:06 -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.703 01:04:06 -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.703 01:04:06 -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.703 01:04:06 -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:32.703 01:04:06 -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.703 01:04:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.703 01:04:06 -- scripts/common.sh@344 -- # case "$op" in 00:04:32.703 01:04:06 -- scripts/common.sh@345 -- # : 1 00:04:32.703 01:04:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.703 01:04:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.703 01:04:06 -- scripts/common.sh@365 -- # decimal 1 00:04:32.703 01:04:06 -- scripts/common.sh@353 -- # local d=1 00:04:32.703 01:04:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.703 01:04:06 -- scripts/common.sh@355 -- # echo 1 00:04:32.703 01:04:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.703 01:04:06 -- scripts/common.sh@366 -- # decimal 2 00:04:32.703 01:04:06 -- scripts/common.sh@353 -- # local d=2 00:04:32.703 01:04:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.703 01:04:06 -- scripts/common.sh@355 -- # echo 2 00:04:32.703 01:04:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.703 01:04:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.703 01:04:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.703 01:04:06 -- scripts/common.sh@368 -- # return 0 00:04:32.703 01:04:06 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.703 01:04:06 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.704 --rc genhtml_branch_coverage=1 00:04:32.704 --rc genhtml_function_coverage=1 00:04:32.704 --rc genhtml_legend=1 00:04:32.704 --rc geninfo_all_blocks=1 00:04:32.704 --rc geninfo_unexecuted_blocks=1 00:04:32.704 00:04:32.704 ' 00:04:32.704 01:04:06 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.704 --rc 
genhtml_branch_coverage=1 00:04:32.704 --rc genhtml_function_coverage=1 00:04:32.704 --rc genhtml_legend=1 00:04:32.704 --rc geninfo_all_blocks=1 00:04:32.704 --rc geninfo_unexecuted_blocks=1 00:04:32.704 00:04:32.704 ' 00:04:32.704 01:04:06 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.704 --rc genhtml_branch_coverage=1 00:04:32.704 --rc genhtml_function_coverage=1 00:04:32.704 --rc genhtml_legend=1 00:04:32.704 --rc geninfo_all_blocks=1 00:04:32.704 --rc geninfo_unexecuted_blocks=1 00:04:32.704 00:04:32.704 ' 00:04:32.704 01:04:06 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.704 --rc genhtml_branch_coverage=1 00:04:32.704 --rc genhtml_function_coverage=1 00:04:32.704 --rc genhtml_legend=1 00:04:32.704 --rc geninfo_all_blocks=1 00:04:32.704 --rc geninfo_unexecuted_blocks=1 00:04:32.704 00:04:32.704 ' 00:04:32.704 01:04:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.704 01:04:06 -- nvmf/common.sh@7 -- # uname -s 00:04:32.704 01:04:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.704 01:04:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.704 01:04:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.704 01:04:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.704 01:04:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.704 01:04:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.704 01:04:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.704 01:04:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.704 01:04:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.704 01:04:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.704 01:04:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4400a911-6c1d-4816-8e54-c91ba5397ac7 
00:04:32.704 01:04:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=4400a911-6c1d-4816-8e54-c91ba5397ac7 00:04:32.704 01:04:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.704 01:04:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.704 01:04:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.704 01:04:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.704 01:04:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.704 01:04:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.704 01:04:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.704 01:04:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.704 01:04:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.704 01:04:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.704 01:04:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.704 01:04:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.704 01:04:06 -- paths/export.sh@5 -- # export PATH 00:04:32.704 01:04:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.704 01:04:06 -- nvmf/common.sh@51 -- # : 0 00:04:32.704 01:04:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.704 01:04:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.704 01:04:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.704 01:04:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.704 01:04:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.704 01:04:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.704 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.704 01:04:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.704 01:04:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.704 01:04:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.704 01:04:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:32.704 01:04:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:32.704 01:04:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:32.704 01:04:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:32.704 01:04:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.704 01:04:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:32.704 01:04:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.704 01:04:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:32.704 01:04:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:32.704 01:04:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:32.704 01:04:06 -- spdk/autotest.sh@48 -- # udevadm_pid=67986 
00:04:32.704 01:04:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:32.704 01:04:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:32.704 01:04:06 -- pm/common@17 -- # local monitor 00:04:32.704 01:04:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.704 01:04:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.704 01:04:06 -- pm/common@25 -- # sleep 1 00:04:32.704 01:04:06 -- pm/common@21 -- # date +%s 00:04:32.704 01:04:06 -- pm/common@21 -- # date +%s 00:04:32.965 01:04:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734138246 00:04:32.965 01:04:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734138246 00:04:32.965 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734138246_collect-vmstat.pm.log 00:04:32.965 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734138246_collect-cpu-load.pm.log 00:04:33.907 01:04:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.907 01:04:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:33.907 01:04:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.907 01:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:33.907 01:04:07 -- spdk/autotest.sh@59 -- # create_test_list 00:04:33.907 01:04:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:33.907 01:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:33.907 01:04:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.907 01:04:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.907 01:04:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.907 01:04:07 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.907 01:04:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.907 01:04:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:33.907 01:04:07 -- common/autotest_common.sh@1457 -- # uname 00:04:33.907 01:04:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:33.907 01:04:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:33.907 01:04:07 -- common/autotest_common.sh@1477 -- # uname 00:04:33.907 01:04:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:33.907 01:04:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:33.907 01:04:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:33.907 lcov: LCOV version 1.15 00:04:33.907 01:04:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:48.856 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:48.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.722 01:04:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.722 01:04:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.722 01:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.722 01:04:37 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.722 01:04:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.980 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.546 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.546 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.546 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:04.546 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:04.546 01:04:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:04.546 01:04:38 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:04.546 01:04:38 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:04.546 01:04:38 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:04.546 01:04:38 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:04.546 01:04:38 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:04.546 01:04:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:04.546 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:04.546 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1653 -- # 
[[ none != none ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:04.546 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:04.546 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:05:04.546 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:04.546 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:04.546 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:04.546 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.546 01:04:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.546 01:04:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:04.547 01:04:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.547 01:04:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:04.547 01:04:38 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:04.547 01:04:38 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme3c3n1/queue/zoned ]] 00:05:04.547 01:04:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.547 01:04:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:04.547 01:04:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.547 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.547 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:04.547 01:04:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:04.547 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.547 No valid GPT data, bailing 00:05:04.547 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.804 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:04.804 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:04.804 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.804 1+0 records in 00:05:04.804 1+0 records out 00:05:04.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252878 s, 41.5 MB/s 00:05:04.804 01:04:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.804 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.804 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:04.805 01:04:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:04.805 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.805 No valid GPT data, bailing 00:05:04.805 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.805 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:04.805 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:04.805 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.805 1+0 records in 00:05:04.805 1+0 records out 00:05:04.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540426 s, 194 MB/s 00:05:04.805 01:04:38 -- 
spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.805 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.805 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:04.805 01:04:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:04.805 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:04.805 No valid GPT data, bailing 00:05:04.805 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:04.805 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:04.805 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:04.805 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:04.805 1+0 records in 00:05:04.805 1+0 records out 00:05:04.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569245 s, 184 MB/s 00:05:04.805 01:04:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.805 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.805 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:04.805 01:04:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:04.805 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:04.805 No valid GPT data, bailing 00:05:04.805 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:05.063 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:05.063 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:05.063 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:05.063 1+0 records in 00:05:05.063 1+0 records out 00:05:05.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005188 s, 202 MB/s 00:05:05.063 01:04:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.063 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:05.063 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:05.063 01:04:38 -- 
scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:05.063 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:05.063 No valid GPT data, bailing 00:05:05.063 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:05.063 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:05.063 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:05.063 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:05.063 1+0 records in 00:05:05.063 1+0 records out 00:05:05.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533076 s, 197 MB/s 00:05:05.063 01:04:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.063 01:04:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:05.063 01:04:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:05.063 01:04:38 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:05.063 01:04:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:05.063 No valid GPT data, bailing 00:05:05.063 01:04:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:05.063 01:04:38 -- scripts/common.sh@394 -- # pt= 00:05:05.063 01:04:38 -- scripts/common.sh@395 -- # return 1 00:05:05.063 01:04:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:05.063 1+0 records in 00:05:05.063 1+0 records out 00:05:05.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566536 s, 185 MB/s 00:05:05.063 01:04:38 -- spdk/autotest.sh@105 -- # sync 00:05:05.355 01:04:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:05.355 01:04:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:05.355 01:04:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:07.262 01:04:40 -- spdk/autotest.sh@111 -- # uname -s 00:05:07.262 01:04:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 
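The loop traced above visits each `/dev/nvme*n*` namespace, probes for a partition table (first `spdk-gpt.py`, then `blkid -s PTTYPE`), and zeroes the first 1 MiB of any namespace that has none. A minimal stand-alone sketch of that check-then-wipe step, run against a scratch file here so it is safe to try (the real helpers live in `scripts/common.sh` and `autotest.sh` and target block devices; the function name below is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the block_in_use / wipe step seen in the trace above.
set -euo pipefail

wipe_if_unpartitioned() {
    local block=$1
    # An empty PTTYPE means no partition table was found (the trace
    # prints "No valid GPT data, bailing" on the spdk-gpt.py path).
    local pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null || true)
    if [[ -z "$pt" ]]; then
        # Same wipe as the trace: zero the first 1 MiB.
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc 2>/dev/null
        echo "wiped $block"
    else
        echo "in use ($pt), skipping $block"
    fi
}

# Demo against a temp file instead of a real /dev/nvme*n* node.
tmp=$(mktemp)
wipe_if_unpartitioned "$tmp"
wiped_size=$(( $(wc -c < "$tmp") ))
echo "size after wipe: $wiped_size bytes"
rm -f "$tmp"
```

The `1+0 records in / 1048576 bytes` lines in the log are exactly this `dd` completing on each namespace.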
00:05:07.262 01:04:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:07.262 01:04:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.831 Hugepages 00:05:07.831 node hugesize free / total 00:05:07.831 node0 1048576kB 0 / 0 00:05:07.831 node0 2048kB 0 / 0 00:05:07.831 00:05:07.831 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.831 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.831 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:08.090 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:08.090 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:08.090 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:08.090 01:04:41 -- spdk/autotest.sh@117 -- # uname -s 00:05:08.090 01:04:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:08.090 01:04:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:08.090 01:04:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.263 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.263 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.263 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.263 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.263 01:04:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:10.202 01:04:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:10.202 01:04:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:10.202 01:04:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:10.202 01:04:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:10.202 01:04:43 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:05:10.202 01:04:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.202 01:04:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.202 01:04:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.202 01:04:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.202 01:04:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:10.202 01:04:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:10.202 01:04:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.768 Waiting for block devices as requested 00:05:10.768 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.026 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.026 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:16.310 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:16.310 01:04:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:16.310 01:04:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1543 -- # continue 00:05:16.310 01:04:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
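The per-controller check above greps `oacs` out of `nvme id-ctrl`, takes the value after the colon, and then tests the Namespace Management bit (mask `0x8`), which is how the trace gets `oacs=' 0x12a'` and `oacs_ns_manage=8`. The same parse against a canned `id-ctrl` line (the real trace runs `nvme id-ctrl /dev/nvmeX` on hardware):

```shell
#!/usr/bin/env bash
# Recreate the OACS parse from the trace: grep the id-ctrl output,
# take the field after ':', and mask out the Namespace Management
# bit (bit 3, 0x8). The id-ctrl line here is canned.
id_ctrl_line='oacs      : 0x12a'

oacs=$(echo "$id_ctrl_line" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))
echo "oacs=${oacs// /}, ns-manage bit = $oacs_ns_manage"

if (( oacs_ns_manage != 0 )); then
    echo "controller supports namespace management"
fi
```

A nonzero result is what lets the script go on to read `unvmcap` for each controller before `continue`-ing to the next BDF.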
00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:16.310 01:04:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1543 -- # continue 00:05:16.310 01:04:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:16.310 01:04:49 -- 
common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:16.310 01:04:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1543 -- # continue 00:05:16.310 01:04:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' 
nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:16.310 01:04:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:16.310 01:04:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.310 01:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:16.311 01:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:16.311 01:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:16.311 01:04:49 -- common/autotest_common.sh@1543 -- # continue 00:05:16.311 01:04:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:16.311 01:04:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.311 01:04:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 01:04:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:16.311 01:04:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.311 01:04:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 01:04:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.444 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.444 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.444 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.444 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.444 
01:04:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:17.444 01:04:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.444 01:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.444 01:04:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:17.444 01:04:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:17.444 01:04:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.444 01:04:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:17.444 01:04:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:17.444 01:04:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:17.444 01:04:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:17.444 01:04:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:17.444 01:04:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:17.444 01:04:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:17.444 01:04:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.444 01:04:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:17.444 01:04:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:17.444 01:04:51 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:17.444 01:04:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:17.444 01:04:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.444 01:04:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.444 01:04:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # 
cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.444 01:04:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.444 01:04:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.444 01:04:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.444 01:04:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:17.444 01:04:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.444 01:04:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.444 01:04:51 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:17.444 01:04:51 -- common/autotest_common.sh@1572 -- # return 0 00:05:17.444 01:04:51 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:17.444 01:04:51 -- common/autotest_common.sh@1580 -- # return 0 00:05:17.444 01:04:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:17.444 01:04:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:17.444 01:04:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:17.444 01:04:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:17.444 01:04:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:17.444 01:04:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.444 01:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.444 01:04:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:17.444 01:04:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.444 01:04:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.444 01:04:51 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:17.444 01:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.703 ************************************ 00:05:17.703 START TEST env 00:05:17.703 ************************************ 00:05:17.703 01:04:51 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.703 * Looking for test storage... 00:05:17.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:17.703 01:04:51 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.703 01:04:51 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.703 01:04:51 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.703 01:04:51 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.703 01:04:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.703 01:04:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.703 01:04:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.703 01:04:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.703 01:04:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.703 01:04:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.703 01:04:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.703 01:04:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.703 01:04:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.703 01:04:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.703 01:04:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.703 01:04:51 env -- scripts/common.sh@344 -- # case "$op" in 00:05:17.703 01:04:51 env -- scripts/common.sh@345 -- # : 1 00:05:17.703 01:04:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.703 01:04:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.703 01:04:51 env -- scripts/common.sh@365 -- # decimal 1 00:05:17.703 01:04:51 env -- scripts/common.sh@353 -- # local d=1 00:05:17.704 01:04:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.704 01:04:51 env -- scripts/common.sh@355 -- # echo 1 00:05:17.704 01:04:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.704 01:04:51 env -- scripts/common.sh@366 -- # decimal 2 00:05:17.704 01:04:51 env -- scripts/common.sh@353 -- # local d=2 00:05:17.704 01:04:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.704 01:04:51 env -- scripts/common.sh@355 -- # echo 2 00:05:17.704 01:04:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.704 01:04:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.704 01:04:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.704 01:04:51 env -- scripts/common.sh@368 -- # return 0 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.704 --rc genhtml_branch_coverage=1 00:05:17.704 --rc genhtml_function_coverage=1 00:05:17.704 --rc genhtml_legend=1 00:05:17.704 --rc geninfo_all_blocks=1 00:05:17.704 --rc geninfo_unexecuted_blocks=1 00:05:17.704 00:05:17.704 ' 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.704 --rc genhtml_branch_coverage=1 00:05:17.704 --rc genhtml_function_coverage=1 00:05:17.704 --rc genhtml_legend=1 00:05:17.704 --rc geninfo_all_blocks=1 00:05:17.704 --rc geninfo_unexecuted_blocks=1 00:05:17.704 00:05:17.704 ' 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:17.704 --rc genhtml_branch_coverage=1 00:05:17.704 --rc genhtml_function_coverage=1 00:05:17.704 --rc genhtml_legend=1 00:05:17.704 --rc geninfo_all_blocks=1 00:05:17.704 --rc geninfo_unexecuted_blocks=1 00:05:17.704 00:05:17.704 ' 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.704 --rc genhtml_branch_coverage=1 00:05:17.704 --rc genhtml_function_coverage=1 00:05:17.704 --rc genhtml_legend=1 00:05:17.704 --rc geninfo_all_blocks=1 00:05:17.704 --rc geninfo_unexecuted_blocks=1 00:05:17.704 00:05:17.704 ' 00:05:17.704 01:04:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.704 01:04:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.704 01:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.704 ************************************ 00:05:17.704 START TEST env_memory 00:05:17.704 ************************************ 00:05:17.704 01:04:51 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.704 00:05:17.704 00:05:17.704 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.704 http://cunit.sourceforge.net/ 00:05:17.704 00:05:17.704 00:05:17.704 Suite: memory 00:05:17.704 Test: alloc and free memory map ...[2024-12-14 01:04:51.281952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:17.704 passed 00:05:17.963 Test: mem map translation ...[2024-12-14 01:04:51.320771] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:17.963 [2024-12-14 01:04:51.320885] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:17.963 [2024-12-14 01:04:51.321004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:17.963 [2024-12-14 01:04:51.321043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:17.963 passed 00:05:17.963 Test: mem map registration ...[2024-12-14 01:04:51.389299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:17.963 [2024-12-14 01:04:51.389402] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:17.963 passed 00:05:17.963 Test: mem map adjacent registrations ...passed 00:05:17.963 00:05:17.963 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.963 suites 1 1 n/a 0 0 00:05:17.963 tests 4 4 4 0 0 00:05:17.963 asserts 152 152 152 0 n/a 00:05:17.963 00:05:17.963 Elapsed time = 0.233 seconds 00:05:17.963 00:05:17.963 real 0m0.271s 00:05:17.963 user 0m0.244s 00:05:17.963 sys 0m0.018s 00:05:17.963 01:04:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.963 ************************************ 00:05:17.963 END TEST env_memory 00:05:17.963 ************************************ 00:05:17.963 01:04:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:17.963 01:04:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:17.963 01:04:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.963 01:04:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.963 01:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.963 
************************************ 00:05:17.963 START TEST env_vtophys 00:05:17.963 ************************************ 00:05:17.963 01:04:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.221 EAL: lib.eal log level changed from notice to debug 00:05:18.221 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 1 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 2 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 3 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 4 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 5 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 6 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 7 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 8 as core 0 on socket 0 00:05:18.221 EAL: Detected lcore 9 as core 0 on socket 0 00:05:18.221 EAL: Maximum logical cores by configuration: 128 00:05:18.221 EAL: Detected CPU lcores: 10 00:05:18.221 EAL: Detected NUMA nodes: 1 00:05:18.221 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:18.221 EAL: Detected shared linkage of DPDK 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:18.221 EAL: Registered [vdev] bus. 
00:05:18.221 EAL: bus.vdev log level changed from disabled to notice 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:18.221 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:18.221 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:18.221 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:18.221 EAL: No shared files mode enabled, IPC will be disabled 00:05:18.221 EAL: No shared files mode enabled, IPC is disabled 00:05:18.221 EAL: Selected IOVA mode 'PA' 00:05:18.221 EAL: Probing VFIO support... 00:05:18.221 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.221 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:18.221 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.221 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.221 EAL: Setting up physically contiguous memory... 
00:05:18.221 EAL: Setting maximum number of open files to 524288 00:05:18.221 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.221 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.222 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.222 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.222 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.222 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.222 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.222 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.222 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.222 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.222 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.222 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.222 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.222 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.222 EAL: Hugepages will be freed exactly as allocated. 
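The EAL lines above size each memseg list at `n_segs:8192` pages of `hugepage_sz:2097152` bytes, which is exactly the `0x400000000` (16 GiB) of virtual address space reserved per list; with four lists the run reserves 64 GiB of VA. A quick arithmetic check of those logged numbers:

```shell
#!/usr/bin/env bash
# Check the memseg-list arithmetic from the EAL log above:
# 8192 segments x 2 MiB hugepages = 0x400000000 bytes per list.
n_segs=8192
hugepage_sz=2097152            # from "hugepage_sz:2097152"
list_bytes=$(( n_segs * hugepage_sz ))

echo "per-list VA: $list_bytes bytes ($(( list_bytes >> 30 )) GiB)"
echo "matches 0x400000000: $(( list_bytes == 0x400000000 ))"
echo "4 lists total: $(( 4 * list_bytes >> 30 )) GiB"
```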
00:05:18.222 EAL: No shared files mode enabled, IPC is disabled 00:05:18.222 EAL: No shared files mode enabled, IPC is disabled 00:05:18.222 EAL: TSC frequency is ~2600000 KHz 00:05:18.222 EAL: Main lcore 0 is ready (tid=7f97812bfa40;cpuset=[0]) 00:05:18.222 EAL: Trying to obtain current memory policy. 00:05:18.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.222 EAL: Restoring previous memory policy: 0 00:05:18.222 EAL: request: mp_malloc_sync 00:05:18.222 EAL: No shared files mode enabled, IPC is disabled 00:05:18.222 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.222 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.222 EAL: No shared files mode enabled, IPC is disabled 00:05:18.222 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.222 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.222 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:18.222 00:05:18.222 00:05:18.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.222 http://cunit.sourceforge.net/ 00:05:18.222 00:05:18.222 00:05:18.222 Suite: components_suite 00:05:18.480 Test: vtophys_malloc_test ...passed 00:05:18.480 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:18.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.480 EAL: Restoring previous memory policy: 4 00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.480 EAL: request: mp_malloc_sync 00:05:18.480 EAL: No shared files mode enabled, IPC is disabled 00:05:18.480 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.480 EAL: request: mp_malloc_sync 00:05:18.480 EAL: No shared files mode enabled, IPC is disabled 00:05:18.480 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.480 EAL: Trying to obtain current memory policy. 
00:05:18.480 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.480 EAL: Restoring previous memory policy: 4
00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.480 EAL: request: mp_malloc_sync
00:05:18.480 EAL: No shared files mode enabled, IPC is disabled
00:05:18.480 EAL: Heap on socket 0 was expanded by 6MB
00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.480 EAL: request: mp_malloc_sync
00:05:18.480 EAL: No shared files mode enabled, IPC is disabled
00:05:18.480 EAL: Heap on socket 0 was shrunk by 6MB
00:05:18.480 EAL: Trying to obtain current memory policy.
00:05:18.480 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.480 EAL: Restoring previous memory policy: 4
00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.480 EAL: request: mp_malloc_sync
00:05:18.480 EAL: No shared files mode enabled, IPC is disabled
00:05:18.480 EAL: Heap on socket 0 was expanded by 10MB
00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.480 EAL: request: mp_malloc_sync
00:05:18.480 EAL: No shared files mode enabled, IPC is disabled
00:05:18.480 EAL: Heap on socket 0 was shrunk by 10MB
00:05:18.480 EAL: Trying to obtain current memory policy.
00:05:18.480 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.480 EAL: Restoring previous memory policy: 4
00:05:18.480 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.480 EAL: request: mp_malloc_sync
00:05:18.480 EAL: No shared files mode enabled, IPC is disabled
00:05:18.480 EAL: Heap on socket 0 was expanded by 18MB
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was shrunk by 18MB
00:05:18.481 EAL: Trying to obtain current memory policy.
00:05:18.481 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.481 EAL: Restoring previous memory policy: 4
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was expanded by 34MB
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was shrunk by 34MB
00:05:18.481 EAL: Trying to obtain current memory policy.
00:05:18.481 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.481 EAL: Restoring previous memory policy: 4
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was expanded by 66MB
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was shrunk by 66MB
00:05:18.481 EAL: Trying to obtain current memory policy.
00:05:18.481 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.481 EAL: Restoring previous memory policy: 4
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.481 EAL: request: mp_malloc_sync
00:05:18.481 EAL: No shared files mode enabled, IPC is disabled
00:05:18.481 EAL: Heap on socket 0 was expanded by 130MB
00:05:18.481 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.739 EAL: request: mp_malloc_sync
00:05:18.739 EAL: No shared files mode enabled, IPC is disabled
00:05:18.739 EAL: Heap on socket 0 was shrunk by 130MB
00:05:18.739 EAL: Trying to obtain current memory policy.
00:05:18.739 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.739 EAL: Restoring previous memory policy: 4
00:05:18.739 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.739 EAL: request: mp_malloc_sync
00:05:18.739 EAL: No shared files mode enabled, IPC is disabled
00:05:18.739 EAL: Heap on socket 0 was expanded by 258MB
00:05:18.739 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.739 EAL: request: mp_malloc_sync
00:05:18.739 EAL: No shared files mode enabled, IPC is disabled
00:05:18.739 EAL: Heap on socket 0 was shrunk by 258MB
00:05:18.739 EAL: Trying to obtain current memory policy.
00:05:18.739 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.739 EAL: Restoring previous memory policy: 4
00:05:18.739 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.739 EAL: request: mp_malloc_sync
00:05:18.739 EAL: No shared files mode enabled, IPC is disabled
00:05:18.739 EAL: Heap on socket 0 was expanded by 514MB
00:05:18.739 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.998 EAL: request: mp_malloc_sync
00:05:18.998 EAL: No shared files mode enabled, IPC is disabled
00:05:18.998 EAL: Heap on socket 0 was shrunk by 514MB
00:05:18.998 EAL: Trying to obtain current memory policy.
00:05:18.998 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:18.998 EAL: Restoring previous memory policy: 4
00:05:18.998 EAL: Calling mem event callback 'spdk:(nil)'
00:05:18.998 EAL: request: mp_malloc_sync
00:05:18.998 EAL: No shared files mode enabled, IPC is disabled
00:05:18.998 EAL: Heap on socket 0 was expanded by 1026MB
00:05:19.256 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.256 passed
00:05:19.256
00:05:19.256 Run Summary: Type Total Ran Passed Failed Inactive
00:05:19.256 suites 1 1 n/a 0 0
00:05:19.256 tests 2 2 2 0 0
00:05:19.256 asserts 5316 5316 5316 0 n/a
00:05:19.256
00:05:19.256 Elapsed time = 0.966 seconds
00:05:19.256 EAL: request: mp_malloc_sync
00:05:19.256 EAL: No shared files mode enabled, IPC is disabled
00:05:19.256 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:19.256 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.256 EAL: request: mp_malloc_sync
00:05:19.256 EAL: No shared files mode enabled, IPC is disabled
00:05:19.256 EAL: Heap on socket 0 was shrunk by 2MB
00:05:19.256 EAL: No shared files mode enabled, IPC is disabled
00:05:19.256 EAL: No shared files mode enabled, IPC is disabled
00:05:19.256 EAL: No shared files mode enabled, IPC is disabled
00:05:19.256
00:05:19.256 real 0m1.176s
00:05:19.256 user 0m0.477s
00:05:19.256 sys 0m0.571s
00:05:19.256 ************************************
00:05:19.256 END TEST env_vtophys
00:05:19.256 ************************************
00:05:19.256 01:04:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.256 01:04:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:19.256 01:04:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:19.256 01:04:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.256 01:04:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.256 01:04:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:19.256 ************************************
00:05:19.256 START TEST env_pci
00:05:19.256 ************************************
00:05:19.256 01:04:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:19.256
00:05:19.256
00:05:19.256 CUnit - A unit testing framework for C - Version 2.1-3
00:05:19.256 http://cunit.sourceforge.net/
00:05:19.256
00:05:19.256
00:05:19.256 Suite: pci
00:05:19.256 Test: pci_hook ...[2024-12-14 01:04:52.794095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70708 has claimed it
00:05:19.256 passed
00:05:19.256
00:05:19.256 Run Summary: Type Total Ran Passed Failed Inactive
00:05:19.257 suites 1 1 n/a 0 0
00:05:19.257 tests 1 1 1 0 0
00:05:19.257 asserts 25 25 25 0 n/a
00:05:19.257
00:05:19.257 Elapsed time = 0.006 seconds
00:05:19.257 EAL: Cannot find device (10000:00:01.0)
00:05:19.257 EAL: Failed to attach device on primary process
00:05:19.257
00:05:19.257 real 0m0.048s
00:05:19.257 user 0m0.021s
00:05:19.257 sys 0m0.026s
00:05:19.257 01:04:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.257 01:04:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:19.257 ************************************
00:05:19.257 END TEST env_pci
00:05:19.257 ************************************
00:05:19.257 01:04:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:19.257 01:04:52 env -- env/env.sh@15 -- # uname
00:05:19.257 01:04:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:19.257 01:04:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:19.257 01:04:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:19.257 01:04:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:19.257 01:04:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.257 01:04:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:19.257 ************************************
00:05:19.257 START TEST env_dpdk_post_init
00:05:19.257 ************************************
00:05:19.257 01:04:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:19.515 EAL: Detected CPU lcores: 10
00:05:19.515 EAL: Detected NUMA nodes: 1
00:05:19.515 EAL: Detected shared linkage of DPDK
00:05:19.515 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:19.515 EAL: Selected IOVA mode 'PA'
00:05:19.515 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:19.515 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:19.515 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:19.515 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1)
00:05:19.515 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1)
00:05:19.515 Starting DPDK initialization...
00:05:19.515 Starting SPDK post initialization...
00:05:19.515 SPDK NVMe probe
00:05:19.515 Attaching to 0000:00:10.0
00:05:19.515 Attaching to 0000:00:11.0
00:05:19.515 Attaching to 0000:00:12.0
00:05:19.515 Attaching to 0000:00:13.0
00:05:19.515 Attached to 0000:00:10.0
00:05:19.515 Attached to 0000:00:11.0
00:05:19.515 Attached to 0000:00:13.0
00:05:19.515 Attached to 0000:00:12.0
00:05:19.515 Cleaning up...
00:05:19.515
00:05:19.515 real 0m0.211s
00:05:19.515 user 0m0.061s
00:05:19.515 sys 0m0.053s
00:05:19.515 01:04:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.515 ************************************
00:05:19.515 01:04:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:19.515 END TEST env_dpdk_post_init
00:05:19.515 ************************************
00:05:19.515 01:04:53 env -- env/env.sh@26 -- # uname
00:05:19.515 01:04:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:19.515 01:04:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:19.515 01:04:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.515 01:04:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.515 01:04:53 env -- common/autotest_common.sh@10 -- # set +x
00:05:19.515 ************************************
00:05:19.515 START TEST env_mem_callbacks
00:05:19.515 ************************************
00:05:19.515 01:04:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:19.773 EAL: Detected CPU lcores: 10
00:05:19.773 EAL: Detected NUMA nodes: 1
00:05:19.773 EAL: Detected shared linkage of DPDK
00:05:19.773 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:19.773 EAL: Selected IOVA mode 'PA'
00:05:19.773
00:05:19.773
00:05:19.773 CUnit - A unit testing framework for C - Version 2.1-3
00:05:19.773 http://cunit.sourceforge.net/
00:05:19.773
00:05:19.773
00:05:19.773 Suite: memory
00:05:19.773 Test: test ...
00:05:19.773 register 0x200000200000 2097152
00:05:19.773 malloc 3145728
00:05:19.773 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:19.774 register 0x200000400000 4194304
00:05:19.774 buf 0x200000500000 len 3145728 PASSED
00:05:19.774 malloc 64
00:05:19.774 buf 0x2000004fff40 len 64 PASSED
00:05:19.774 malloc 4194304
00:05:19.774 register 0x200000800000 6291456
00:05:19.774 buf 0x200000a00000 len 4194304 PASSED
00:05:19.774 free 0x200000500000 3145728
00:05:19.774 free 0x2000004fff40 64
00:05:19.774 unregister 0x200000400000 4194304 PASSED
00:05:19.774 free 0x200000a00000 4194304
00:05:19.774 unregister 0x200000800000 6291456 PASSED
00:05:19.774 malloc 8388608
00:05:19.774 register 0x200000400000 10485760
00:05:19.774 buf 0x200000600000 len 8388608 PASSED
00:05:19.774 free 0x200000600000 8388608
00:05:19.774 unregister 0x200000400000 10485760 PASSED
00:05:19.774 passed
00:05:19.774
00:05:19.774 Run Summary: Type Total Ran Passed Failed Inactive
00:05:19.774 suites 1 1 n/a 0 0
00:05:19.774 tests 1 1 1 0 0
00:05:19.774 asserts 15 15 15 0 n/a
00:05:19.774
00:05:19.774 Elapsed time = 0.009 seconds
00:05:19.774
00:05:19.774 real 0m0.153s
00:05:19.774 user 0m0.023s
00:05:19.774 sys 0m0.028s
00:05:19.774 01:04:53 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.774 01:04:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:19.774 ************************************
00:05:19.774 END TEST env_mem_callbacks
00:05:19.774 ************************************
00:05:19.774 ************************************
00:05:19.774 END TEST env
00:05:19.774 ************************************
00:05:19.774
00:05:19.774 real 0m2.236s
00:05:19.774 user 0m0.967s
00:05:19.774 sys 0m0.900s
00:05:19.774 01:04:53 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.774 01:04:53 env -- common/autotest_common.sh@10 -- # set +x
00:05:19.774 01:04:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:19.774 01:04:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.774 01:04:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.774 01:04:53 -- common/autotest_common.sh@10 -- # set +x
00:05:19.774 ************************************
00:05:19.774 START TEST rpc
00:05:19.774 ************************************
00:05:19.774 01:04:53 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:20.032 * Looking for test storage...
00:05:20.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:20.032 01:04:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:20.032 01:04:53 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:20.032 01:04:53 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:20.032 01:04:53 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:20.032 01:04:53 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:20.032 01:04:53 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:20.032 01:04:53 rpc -- scripts/common.sh@345 -- # : 1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:20.032 01:04:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:20.032 01:04:53 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@353 -- # local d=1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:20.032 01:04:53 rpc -- scripts/common.sh@355 -- # echo 1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:20.032 01:04:53 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@353 -- # local d=2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:20.032 01:04:53 rpc -- scripts/common.sh@355 -- # echo 2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:20.032 01:04:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:20.032 01:04:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:20.032 01:04:53 rpc -- scripts/common.sh@368 -- # return 0
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:20.032 01:04:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:20.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.032 --rc genhtml_branch_coverage=1
00:05:20.032 --rc genhtml_function_coverage=1
00:05:20.032 --rc genhtml_legend=1
00:05:20.032 --rc geninfo_all_blocks=1
00:05:20.032 --rc geninfo_unexecuted_blocks=1
00:05:20.032
00:05:20.032 '
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.033 --rc genhtml_branch_coverage=1
00:05:20.033 --rc genhtml_function_coverage=1
00:05:20.033 --rc genhtml_legend=1
00:05:20.033 --rc geninfo_all_blocks=1
00:05:20.033 --rc geninfo_unexecuted_blocks=1
00:05:20.033
00:05:20.033 '
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.033 --rc genhtml_branch_coverage=1
00:05:20.033 --rc genhtml_function_coverage=1
00:05:20.033 --rc genhtml_legend=1
00:05:20.033 --rc geninfo_all_blocks=1
00:05:20.033 --rc geninfo_unexecuted_blocks=1
00:05:20.033
00:05:20.033 '
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:20.033 --rc genhtml_branch_coverage=1
00:05:20.033 --rc genhtml_function_coverage=1
00:05:20.033 --rc genhtml_legend=1
00:05:20.033 --rc geninfo_all_blocks=1
00:05:20.033 --rc geninfo_unexecuted_blocks=1
00:05:20.033
00:05:20.033 '
00:05:20.033 01:04:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70830
00:05:20.033 01:04:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:20.033 01:04:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:20.033 01:04:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70830
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 70830 ']'
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.033 01:04:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.033 [2024-12-14 01:04:53.536417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:05:20.033 [2024-12-14 01:04:53.536535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70830 ]
00:05:20.291 [2024-12-14 01:04:53.681954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.291 [2024-12-14 01:04:53.701262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:20.291 [2024-12-14 01:04:53.701302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70830' to capture a snapshot of events at runtime.
00:05:20.291 [2024-12-14 01:04:53.701313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:20.291 [2024-12-14 01:04:53.701324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:20.291 [2024-12-14 01:04:53.701334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70830 for offline analysis/debug.
00:05:20.291 [2024-12-14 01:04:53.701648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.858 01:04:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.858 01:04:54 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:20.858 01:04:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:20.858 01:04:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:20.858 01:04:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:20.858 01:04:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:20.858 01:04:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:20.858 01:04:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.858 01:04:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.858 ************************************
00:05:20.858 START TEST rpc_integrity
00:05:20.858 ************************************
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:20.858 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:20.858 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:20.858 {
00:05:20.858 "name": "Malloc0",
00:05:20.858 "aliases": [
00:05:20.858 "4c902370-acd2-417d-baff-309cb66a95a1"
00:05:20.858 ],
00:05:20.858 "product_name": "Malloc disk",
00:05:20.858 "block_size": 512,
00:05:20.858 "num_blocks": 16384,
00:05:20.858 "uuid": "4c902370-acd2-417d-baff-309cb66a95a1",
00:05:20.858 "assigned_rate_limits": {
00:05:20.858 "rw_ios_per_sec": 0,
00:05:20.858 "rw_mbytes_per_sec": 0,
00:05:20.858 "r_mbytes_per_sec": 0,
00:05:20.858 "w_mbytes_per_sec": 0
00:05:20.858 },
00:05:20.858 "claimed": false,
00:05:20.859 "zoned": false,
00:05:20.859 "supported_io_types": {
00:05:20.859 "read": true,
00:05:20.859 "write": true,
00:05:20.859 "unmap": true,
00:05:20.859 "flush": true,
00:05:20.859 "reset": true,
00:05:20.859 "nvme_admin": false,
00:05:20.859 "nvme_io": false,
00:05:20.859 "nvme_io_md": false,
00:05:20.859 "write_zeroes": true,
00:05:20.859 "zcopy": true,
00:05:20.859 "get_zone_info": false,
00:05:20.859 "zone_management": false,
00:05:20.859 "zone_append": false,
00:05:20.859 "compare": false,
00:05:20.859 "compare_and_write": false,
00:05:20.859 "abort": true,
00:05:20.859 "seek_hole": false,
00:05:20.859 "seek_data": false,
00:05:20.859 "copy": true,
00:05:20.859 "nvme_iov_md": false
00:05:20.859 },
00:05:20.859 "memory_domains": [
00:05:20.859 {
00:05:20.859 "dma_device_id": "system",
00:05:20.859 "dma_device_type": 1
00:05:20.859 },
00:05:20.859 {
00:05:20.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:20.859 "dma_device_type": 2
00:05:20.859 }
00:05:20.859 ],
00:05:20.859 "driver_specific": {}
00:05:20.859 }
00:05:20.859 ]'
00:05:21.117 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:21.117 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:21.117 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.117 [2024-12-14 01:04:54.483845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:21.117 [2024-12-14 01:04:54.483901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:21.117 [2024-12-14 01:04:54.483923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:05:21.117 [2024-12-14 01:04:54.483933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:21.117 [2024-12-14 01:04:54.486106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:21.117 [2024-12-14 01:04:54.486141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:21.117 Passthru0
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.117 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.117 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.117 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:21.117 {
00:05:21.117 "name": "Malloc0",
00:05:21.117 "aliases": [
00:05:21.117 "4c902370-acd2-417d-baff-309cb66a95a1"
00:05:21.117 ],
00:05:21.117 "product_name": "Malloc disk",
00:05:21.117 "block_size": 512,
00:05:21.117 "num_blocks": 16384,
00:05:21.117 "uuid": "4c902370-acd2-417d-baff-309cb66a95a1",
00:05:21.117 "assigned_rate_limits": {
00:05:21.117 "rw_ios_per_sec": 0,
00:05:21.117 "rw_mbytes_per_sec": 0,
00:05:21.117 "r_mbytes_per_sec": 0,
00:05:21.117 "w_mbytes_per_sec": 0
00:05:21.117 },
00:05:21.117 "claimed": true,
00:05:21.117 "claim_type": "exclusive_write",
00:05:21.117 "zoned": false,
00:05:21.117 "supported_io_types": {
00:05:21.117 "read": true,
00:05:21.117 "write": true,
00:05:21.117 "unmap": true,
00:05:21.117 "flush": true,
00:05:21.117 "reset": true,
00:05:21.117 "nvme_admin": false,
00:05:21.117 "nvme_io": false,
00:05:21.117 "nvme_io_md": false,
00:05:21.117 "write_zeroes": true,
00:05:21.117 "zcopy": true,
00:05:21.117 "get_zone_info": false,
00:05:21.117 "zone_management": false,
00:05:21.117 "zone_append": false,
00:05:21.117 "compare": false,
00:05:21.117 "compare_and_write": false,
00:05:21.117 "abort": true,
00:05:21.117 "seek_hole": false,
00:05:21.117 "seek_data": false,
00:05:21.117 "copy": true,
00:05:21.117 "nvme_iov_md": false
00:05:21.117 },
00:05:21.117 "memory_domains": [
00:05:21.117 {
00:05:21.117 "dma_device_id": "system",
00:05:21.117 "dma_device_type": 1
00:05:21.117 },
00:05:21.117 {
00:05:21.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:21.117 "dma_device_type": 2
00:05:21.117 }
00:05:21.117 ],
00:05:21.117 "driver_specific": {}
00:05:21.117 },
00:05:21.117 {
00:05:21.117 "name": "Passthru0",
00:05:21.117 "aliases": [
00:05:21.117 "38231d23-6ebe-554a-82e8-1c2db21f0d62"
00:05:21.117 ],
00:05:21.117 "product_name": "passthru",
00:05:21.117 "block_size": 512,
00:05:21.117 "num_blocks": 16384,
00:05:21.117 "uuid": "38231d23-6ebe-554a-82e8-1c2db21f0d62",
00:05:21.117 "assigned_rate_limits": {
00:05:21.117 "rw_ios_per_sec": 0,
00:05:21.117 "rw_mbytes_per_sec": 0,
00:05:21.117 "r_mbytes_per_sec": 0,
00:05:21.117 "w_mbytes_per_sec": 0
00:05:21.117 },
00:05:21.118 "claimed": false,
00:05:21.118 "zoned": false,
00:05:21.118 "supported_io_types": {
00:05:21.118 "read": true,
00:05:21.118 "write": true,
00:05:21.118 "unmap": true,
00:05:21.118 "flush": true,
00:05:21.118 "reset": true,
00:05:21.118 "nvme_admin": false,
00:05:21.118 "nvme_io": false,
00:05:21.118 "nvme_io_md": false,
00:05:21.118 "write_zeroes": true,
00:05:21.118 "zcopy": true,
00:05:21.118 "get_zone_info": false,
00:05:21.118 "zone_management": false,
00:05:21.118 "zone_append": false,
00:05:21.118 "compare": false,
00:05:21.118 "compare_and_write": false,
00:05:21.118 "abort": true,
00:05:21.118 "seek_hole": false,
00:05:21.118 "seek_data": false,
00:05:21.118 "copy": true,
00:05:21.118 "nvme_iov_md": false
00:05:21.118 },
00:05:21.118 "memory_domains": [
00:05:21.118 {
00:05:21.118 "dma_device_id": "system",
00:05:21.118 "dma_device_type": 1
00:05:21.118 },
00:05:21.118 {
00:05:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:21.118 "dma_device_type": 2
00:05:21.118 }
00:05:21.118 ],
00:05:21.118 "driver_specific": {
00:05:21.118 "passthru": {
00:05:21.118 "name": "Passthru0",
00:05:21.118 "base_bdev_name": "Malloc0"
00:05:21.118 }
00:05:21.118 }
00:05:21.118 }
00:05:21.118 ]'
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:21.118 01:04:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:21.118
00:05:21.118 real 0m0.220s
00:05:21.118 user 0m0.127s
00:05:21.118 sys 0m0.035s
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 ************************************
00:05:21.118 END TEST rpc_integrity
00:05:21.118 ************************************
00:05:21.118 01:04:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:21.118 01:04:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.118 01:04:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.118 01:04:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 ************************************
00:05:21.118 START TEST rpc_plugins
00:05:21.118 ************************************
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:21.118 {
00:05:21.118 "name": "Malloc1",
00:05:21.118 "aliases": [
00:05:21.118 "cca045e8-c20c-40a0-a1d1-a6276fb06c66"
00:05:21.118 ],
00:05:21.118 "product_name": "Malloc disk",
00:05:21.118 "block_size": 4096,
00:05:21.118 "num_blocks": 256,
00:05:21.118 "uuid": "cca045e8-c20c-40a0-a1d1-a6276fb06c66",
00:05:21.118 "assigned_rate_limits": {
00:05:21.118 "rw_ios_per_sec": 0,
00:05:21.118 "rw_mbytes_per_sec": 0,
00:05:21.118 "r_mbytes_per_sec": 0,
00:05:21.118 "w_mbytes_per_sec": 0
00:05:21.118 },
00:05:21.118 "claimed": false,
00:05:21.118 "zoned": false,
00:05:21.118 "supported_io_types": {
00:05:21.118 "read": true,
00:05:21.118 "write": true,
00:05:21.118 "unmap": true,
00:05:21.118 "flush": true,
00:05:21.118 "reset": true,
00:05:21.118 "nvme_admin": false,
00:05:21.118 "nvme_io": false,
00:05:21.118 "nvme_io_md": false,
00:05:21.118 "write_zeroes": true,
00:05:21.118 "zcopy": true,
00:05:21.118 "get_zone_info": false,
00:05:21.118 "zone_management": false,
00:05:21.118 "zone_append": false,
00:05:21.118 "compare": false,
00:05:21.118 "compare_and_write": false,
00:05:21.118 "abort": true,
00:05:21.118 "seek_hole": false,
00:05:21.118 "seek_data": false,
00:05:21.118 "copy": true,
00:05:21.118 "nvme_iov_md": false
00:05:21.118 },
00:05:21.118 "memory_domains": [
00:05:21.118 {
00:05:21.118 "dma_device_id": "system",
00:05:21.118 "dma_device_type": 1
00:05:21.118 },
00:05:21.118 {
00:05:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:21.118 "dma_device_type": 2
00:05:21.118 }
00:05:21.118 ],
00:05:21.118 "driver_specific": {}
00:05:21.118 }
00:05:21.118 ]'
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:21.118 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:21.118 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:21.377 01:04:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:21.377
00:05:21.377 real 0m0.106s
00:05:21.377 user 0m0.059s
00:05:21.377 sys 0m0.015s
00:05:21.377 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.377 01:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:21.377 ************************************
00:05:21.377 END TEST rpc_plugins
00:05:21.377 ************************************
00:05:21.377 01:04:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:21.377 01:04:54 rpc --
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.377 01:04:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.377 01:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.377 ************************************ 00:05:21.377 START TEST rpc_trace_cmd_test 00:05:21.377 ************************************ 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:21.377 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70830", 00:05:21.377 "tpoint_group_mask": "0x8", 00:05:21.377 "iscsi_conn": { 00:05:21.377 "mask": "0x2", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "scsi": { 00:05:21.377 "mask": "0x4", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "bdev": { 00:05:21.377 "mask": "0x8", 00:05:21.377 "tpoint_mask": "0xffffffffffffffff" 00:05:21.377 }, 00:05:21.377 "nvmf_rdma": { 00:05:21.377 "mask": "0x10", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "nvmf_tcp": { 00:05:21.377 "mask": "0x20", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "ftl": { 00:05:21.377 "mask": "0x40", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "blobfs": { 00:05:21.377 "mask": "0x80", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "dsa": { 00:05:21.377 "mask": "0x200", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "thread": { 00:05:21.377 "mask": "0x400", 00:05:21.377 
"tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "nvme_pcie": { 00:05:21.377 "mask": "0x800", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "iaa": { 00:05:21.377 "mask": "0x1000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "nvme_tcp": { 00:05:21.377 "mask": "0x2000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "bdev_nvme": { 00:05:21.377 "mask": "0x4000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "sock": { 00:05:21.377 "mask": "0x8000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "blob": { 00:05:21.377 "mask": "0x10000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "bdev_raid": { 00:05:21.377 "mask": "0x20000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 }, 00:05:21.377 "scheduler": { 00:05:21.377 "mask": "0x40000", 00:05:21.377 "tpoint_mask": "0x0" 00:05:21.377 } 00:05:21.377 }' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:21.377 00:05:21.377 real 0m0.166s 00:05:21.377 user 0m0.135s 00:05:21.377 sys 0m0.024s 00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:21.377 01:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.377 ************************************ 00:05:21.377 END TEST rpc_trace_cmd_test 00:05:21.377 ************************************ 00:05:21.377 01:04:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:21.377 01:04:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:21.377 01:04:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:21.377 01:04:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.377 01:04:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.377 01:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.377 ************************************ 00:05:21.377 START TEST rpc_daemon_integrity 00:05:21.377 ************************************ 00:05:21.377 01:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.635 01:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.635 { 00:05:21.635 "name": "Malloc2", 00:05:21.635 "aliases": [ 00:05:21.635 "974f5b04-514f-4dcb-bd2d-09bbb85aba4e" 00:05:21.635 ], 00:05:21.635 "product_name": "Malloc disk", 00:05:21.635 "block_size": 512, 00:05:21.635 "num_blocks": 16384, 00:05:21.635 "uuid": "974f5b04-514f-4dcb-bd2d-09bbb85aba4e", 00:05:21.635 "assigned_rate_limits": { 00:05:21.635 "rw_ios_per_sec": 0, 00:05:21.635 "rw_mbytes_per_sec": 0, 00:05:21.635 "r_mbytes_per_sec": 0, 00:05:21.635 "w_mbytes_per_sec": 0 00:05:21.635 }, 00:05:21.635 "claimed": false, 00:05:21.635 "zoned": false, 00:05:21.635 "supported_io_types": { 00:05:21.635 "read": true, 00:05:21.635 "write": true, 00:05:21.635 "unmap": true, 00:05:21.635 "flush": true, 00:05:21.635 "reset": true, 00:05:21.635 "nvme_admin": false, 00:05:21.635 "nvme_io": false, 00:05:21.635 "nvme_io_md": false, 00:05:21.635 "write_zeroes": true, 00:05:21.635 "zcopy": true, 00:05:21.635 "get_zone_info": false, 00:05:21.635 "zone_management": false, 00:05:21.635 "zone_append": false, 00:05:21.635 "compare": false, 00:05:21.635 "compare_and_write": false, 00:05:21.635 "abort": true, 00:05:21.635 "seek_hole": false, 00:05:21.635 "seek_data": false, 00:05:21.635 "copy": true, 00:05:21.635 "nvme_iov_md": false 00:05:21.635 }, 00:05:21.635 "memory_domains": [ 00:05:21.635 { 00:05:21.635 "dma_device_id": "system", 00:05:21.635 "dma_device_type": 1 00:05:21.635 }, 00:05:21.635 { 00:05:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.635 "dma_device_type": 2 00:05:21.635 } 
00:05:21.635 ], 00:05:21.635 "driver_specific": {} 00:05:21.635 } 00:05:21.635 ]' 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.635 [2024-12-14 01:04:55.096207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:21.635 [2024-12-14 01:04:55.096260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.635 [2024-12-14 01:04:55.096283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:21.635 [2024-12-14 01:04:55.096291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.635 [2024-12-14 01:04:55.098438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.635 [2024-12-14 01:04:55.098471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.635 Passthru0 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.635 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.635 { 00:05:21.635 "name": "Malloc2", 00:05:21.635 "aliases": [ 00:05:21.635 "974f5b04-514f-4dcb-bd2d-09bbb85aba4e" 
00:05:21.635 ], 00:05:21.635 "product_name": "Malloc disk", 00:05:21.635 "block_size": 512, 00:05:21.635 "num_blocks": 16384, 00:05:21.635 "uuid": "974f5b04-514f-4dcb-bd2d-09bbb85aba4e", 00:05:21.635 "assigned_rate_limits": { 00:05:21.635 "rw_ios_per_sec": 0, 00:05:21.635 "rw_mbytes_per_sec": 0, 00:05:21.635 "r_mbytes_per_sec": 0, 00:05:21.635 "w_mbytes_per_sec": 0 00:05:21.635 }, 00:05:21.635 "claimed": true, 00:05:21.635 "claim_type": "exclusive_write", 00:05:21.635 "zoned": false, 00:05:21.635 "supported_io_types": { 00:05:21.635 "read": true, 00:05:21.635 "write": true, 00:05:21.635 "unmap": true, 00:05:21.635 "flush": true, 00:05:21.635 "reset": true, 00:05:21.635 "nvme_admin": false, 00:05:21.635 "nvme_io": false, 00:05:21.635 "nvme_io_md": false, 00:05:21.635 "write_zeroes": true, 00:05:21.635 "zcopy": true, 00:05:21.635 "get_zone_info": false, 00:05:21.635 "zone_management": false, 00:05:21.635 "zone_append": false, 00:05:21.635 "compare": false, 00:05:21.635 "compare_and_write": false, 00:05:21.635 "abort": true, 00:05:21.635 "seek_hole": false, 00:05:21.635 "seek_data": false, 00:05:21.635 "copy": true, 00:05:21.635 "nvme_iov_md": false 00:05:21.635 }, 00:05:21.635 "memory_domains": [ 00:05:21.635 { 00:05:21.635 "dma_device_id": "system", 00:05:21.635 "dma_device_type": 1 00:05:21.635 }, 00:05:21.635 { 00:05:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.635 "dma_device_type": 2 00:05:21.635 } 00:05:21.635 ], 00:05:21.635 "driver_specific": {} 00:05:21.636 }, 00:05:21.636 { 00:05:21.636 "name": "Passthru0", 00:05:21.636 "aliases": [ 00:05:21.636 "e27c4b38-1158-5343-8091-04f759c6f7bd" 00:05:21.636 ], 00:05:21.636 "product_name": "passthru", 00:05:21.636 "block_size": 512, 00:05:21.636 "num_blocks": 16384, 00:05:21.636 "uuid": "e27c4b38-1158-5343-8091-04f759c6f7bd", 00:05:21.636 "assigned_rate_limits": { 00:05:21.636 "rw_ios_per_sec": 0, 00:05:21.636 "rw_mbytes_per_sec": 0, 00:05:21.636 "r_mbytes_per_sec": 0, 00:05:21.636 "w_mbytes_per_sec": 0 
00:05:21.636 }, 00:05:21.636 "claimed": false, 00:05:21.636 "zoned": false, 00:05:21.636 "supported_io_types": { 00:05:21.636 "read": true, 00:05:21.636 "write": true, 00:05:21.636 "unmap": true, 00:05:21.636 "flush": true, 00:05:21.636 "reset": true, 00:05:21.636 "nvme_admin": false, 00:05:21.636 "nvme_io": false, 00:05:21.636 "nvme_io_md": false, 00:05:21.636 "write_zeroes": true, 00:05:21.636 "zcopy": true, 00:05:21.636 "get_zone_info": false, 00:05:21.636 "zone_management": false, 00:05:21.636 "zone_append": false, 00:05:21.636 "compare": false, 00:05:21.636 "compare_and_write": false, 00:05:21.636 "abort": true, 00:05:21.636 "seek_hole": false, 00:05:21.636 "seek_data": false, 00:05:21.636 "copy": true, 00:05:21.636 "nvme_iov_md": false 00:05:21.636 }, 00:05:21.636 "memory_domains": [ 00:05:21.636 { 00:05:21.636 "dma_device_id": "system", 00:05:21.636 "dma_device_type": 1 00:05:21.636 }, 00:05:21.636 { 00:05:21.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.636 "dma_device_type": 2 00:05:21.636 } 00:05:21.636 ], 00:05:21.636 "driver_specific": { 00:05:21.636 "passthru": { 00:05:21.636 "name": "Passthru0", 00:05:21.636 "base_bdev_name": "Malloc2" 00:05:21.636 } 00:05:21.636 } 00:05:21.636 } 00:05:21.636 ]' 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.636 00:05:21.636 real 0m0.220s 00:05:21.636 user 0m0.131s 00:05:21.636 sys 0m0.027s 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.636 01:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.636 ************************************ 00:05:21.636 END TEST rpc_daemon_integrity 00:05:21.636 ************************************ 00:05:21.636 01:04:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.636 01:04:55 rpc -- rpc/rpc.sh@84 -- # killprocess 70830 00:05:21.636 01:04:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 70830 ']' 00:05:21.636 01:04:55 rpc -- common/autotest_common.sh@958 -- # kill -0 70830 00:05:21.636 01:04:55 rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.636 01:04:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.636 01:04:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70830 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.894 killing process with pid 70830 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70830' 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@973 -- # kill 70830 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@978 -- # wait 70830 00:05:21.894 ************************************ 00:05:21.894 END TEST rpc 00:05:21.894 ************************************ 00:05:21.894 00:05:21.894 real 0m2.165s 00:05:21.894 user 0m2.603s 00:05:21.894 sys 0m0.561s 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.894 01:04:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.152 01:04:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:22.152 01:04:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.152 01:04:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.152 01:04:55 -- common/autotest_common.sh@10 -- # set +x 00:05:22.152 ************************************ 00:05:22.152 START TEST skip_rpc 00:05:22.152 ************************************ 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:22.152 * Looking for test storage... 
00:05:22.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.152 01:04:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.152 --rc genhtml_branch_coverage=1 00:05:22.152 --rc genhtml_function_coverage=1 00:05:22.152 --rc genhtml_legend=1 00:05:22.152 --rc geninfo_all_blocks=1 00:05:22.152 --rc geninfo_unexecuted_blocks=1 00:05:22.152 00:05:22.152 ' 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.152 --rc genhtml_branch_coverage=1 00:05:22.152 --rc genhtml_function_coverage=1 00:05:22.152 --rc genhtml_legend=1 00:05:22.152 --rc geninfo_all_blocks=1 00:05:22.152 --rc geninfo_unexecuted_blocks=1 00:05:22.152 00:05:22.152 ' 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.152 --rc genhtml_branch_coverage=1 00:05:22.152 --rc genhtml_function_coverage=1 00:05:22.152 --rc genhtml_legend=1 00:05:22.152 --rc geninfo_all_blocks=1 00:05:22.152 --rc geninfo_unexecuted_blocks=1 00:05:22.152 00:05:22.152 ' 00:05:22.152 01:04:55 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.152 --rc genhtml_branch_coverage=1 00:05:22.152 --rc genhtml_function_coverage=1 00:05:22.152 --rc genhtml_legend=1 00:05:22.152 --rc geninfo_all_blocks=1 00:05:22.152 --rc geninfo_unexecuted_blocks=1 00:05:22.152 00:05:22.152 ' 00:05:22.152 01:04:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.152 01:04:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.152 01:04:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:22.153 01:04:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.153 01:04:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.153 01:04:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.153 ************************************ 00:05:22.153 START TEST skip_rpc 00:05:22.153 ************************************ 00:05:22.153 01:04:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:22.153 01:04:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71026 00:05:22.153 01:04:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.153 01:04:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:22.153 01:04:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:22.153 [2024-12-14 01:04:55.749106] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:22.153 [2024-12-14 01:04:55.749225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71026 ] 00:05:22.411 [2024-12-14 01:04:55.896279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.411 [2024-12-14 01:04:55.914514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71026 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71026 ']' 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71026 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71026 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.675 killing process with pid 71026 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71026' 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71026 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71026 00:05:27.675 00:05:27.675 real 0m5.253s 00:05:27.675 user 0m4.946s 00:05:27.675 sys 0m0.210s 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.675 01:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.675 ************************************ 00:05:27.675 END TEST skip_rpc 00:05:27.675 ************************************ 00:05:27.675 01:05:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:27.675 01:05:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.675 01:05:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.675 01:05:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.675 
************************************ 00:05:27.675 START TEST skip_rpc_with_json 00:05:27.675 ************************************ 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71113 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71113 00:05:27.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71113 ']' 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.675 01:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.675 [2024-12-14 01:05:01.057459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:27.675 [2024-12-14 01:05:01.057573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71113 ] 00:05:27.675 [2024-12-14 01:05:01.194025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.675 [2024-12-14 01:05:01.211210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.241 [2024-12-14 01:05:01.843133] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:28.241 request: 00:05:28.241 { 00:05:28.241 "trtype": "tcp", 00:05:28.241 "method": "nvmf_get_transports", 00:05:28.241 "req_id": 1 00:05:28.241 } 00:05:28.241 Got JSON-RPC error response 00:05:28.241 response: 00:05:28.241 { 00:05:28.241 "code": -19, 00:05:28.241 "message": "No such device" 00:05:28.241 } 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.241 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.501 [2024-12-14 01:05:01.855231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:28.501 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.501 01:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:28.501 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.501 01:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.501 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.501 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:28.501 { 00:05:28.501 "subsystems": [ 00:05:28.501 { 00:05:28.501 "subsystem": "fsdev", 00:05:28.501 "config": [ 00:05:28.501 { 00:05:28.501 "method": "fsdev_set_opts", 00:05:28.501 "params": { 00:05:28.501 "fsdev_io_pool_size": 65535, 00:05:28.501 "fsdev_io_cache_size": 256 00:05:28.501 } 00:05:28.501 } 00:05:28.501 ] 00:05:28.501 }, 00:05:28.501 { 00:05:28.501 "subsystem": "keyring", 00:05:28.501 "config": [] 00:05:28.501 }, 00:05:28.501 { 00:05:28.501 "subsystem": "iobuf", 00:05:28.501 "config": [ 00:05:28.501 { 00:05:28.501 "method": "iobuf_set_options", 00:05:28.501 "params": { 00:05:28.501 "small_pool_count": 8192, 00:05:28.501 "large_pool_count": 1024, 00:05:28.501 "small_bufsize": 8192, 00:05:28.501 "large_bufsize": 135168, 00:05:28.501 "enable_numa": false 00:05:28.501 } 00:05:28.501 } 00:05:28.501 ] 00:05:28.501 }, 00:05:28.501 { 00:05:28.501 "subsystem": "sock", 00:05:28.501 "config": [ 00:05:28.501 { 00:05:28.501 "method": "sock_set_default_impl", 00:05:28.501 "params": { 00:05:28.501 "impl_name": "posix" 00:05:28.501 } 00:05:28.501 }, 00:05:28.501 { 00:05:28.501 "method": "sock_impl_set_options", 00:05:28.501 "params": { 00:05:28.501 "impl_name": "ssl", 00:05:28.501 "recv_buf_size": 4096, 00:05:28.501 "send_buf_size": 4096, 00:05:28.501 "enable_recv_pipe": true, 00:05:28.501 "enable_quickack": false, 00:05:28.501 
"enable_placement_id": 0, 00:05:28.501 "enable_zerocopy_send_server": true, 00:05:28.501 "enable_zerocopy_send_client": false, 00:05:28.501 "zerocopy_threshold": 0, 00:05:28.501 "tls_version": 0, 00:05:28.501 "enable_ktls": false 00:05:28.501 } 00:05:28.501 }, 00:05:28.501 { 00:05:28.501 "method": "sock_impl_set_options", 00:05:28.501 "params": { 00:05:28.501 "impl_name": "posix", 00:05:28.501 "recv_buf_size": 2097152, 00:05:28.501 "send_buf_size": 2097152, 00:05:28.501 "enable_recv_pipe": true, 00:05:28.501 "enable_quickack": false, 00:05:28.501 "enable_placement_id": 0, 00:05:28.501 "enable_zerocopy_send_server": true, 00:05:28.501 "enable_zerocopy_send_client": false, 00:05:28.501 "zerocopy_threshold": 0, 00:05:28.501 "tls_version": 0, 00:05:28.502 "enable_ktls": false 00:05:28.502 } 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "vmd", 00:05:28.502 "config": [] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "accel", 00:05:28.502 "config": [ 00:05:28.502 { 00:05:28.502 "method": "accel_set_options", 00:05:28.502 "params": { 00:05:28.502 "small_cache_size": 128, 00:05:28.502 "large_cache_size": 16, 00:05:28.502 "task_count": 2048, 00:05:28.502 "sequence_count": 2048, 00:05:28.502 "buf_count": 2048 00:05:28.502 } 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "bdev", 00:05:28.502 "config": [ 00:05:28.502 { 00:05:28.502 "method": "bdev_set_options", 00:05:28.502 "params": { 00:05:28.502 "bdev_io_pool_size": 65535, 00:05:28.502 "bdev_io_cache_size": 256, 00:05:28.502 "bdev_auto_examine": true, 00:05:28.502 "iobuf_small_cache_size": 128, 00:05:28.502 "iobuf_large_cache_size": 16 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "bdev_raid_set_options", 00:05:28.502 "params": { 00:05:28.502 "process_window_size_kb": 1024, 00:05:28.502 "process_max_bandwidth_mb_sec": 0 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "bdev_iscsi_set_options", 
00:05:28.502 "params": { 00:05:28.502 "timeout_sec": 30 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "bdev_nvme_set_options", 00:05:28.502 "params": { 00:05:28.502 "action_on_timeout": "none", 00:05:28.502 "timeout_us": 0, 00:05:28.502 "timeout_admin_us": 0, 00:05:28.502 "keep_alive_timeout_ms": 10000, 00:05:28.502 "arbitration_burst": 0, 00:05:28.502 "low_priority_weight": 0, 00:05:28.502 "medium_priority_weight": 0, 00:05:28.502 "high_priority_weight": 0, 00:05:28.502 "nvme_adminq_poll_period_us": 10000, 00:05:28.502 "nvme_ioq_poll_period_us": 0, 00:05:28.502 "io_queue_requests": 0, 00:05:28.502 "delay_cmd_submit": true, 00:05:28.502 "transport_retry_count": 4, 00:05:28.502 "bdev_retry_count": 3, 00:05:28.502 "transport_ack_timeout": 0, 00:05:28.502 "ctrlr_loss_timeout_sec": 0, 00:05:28.502 "reconnect_delay_sec": 0, 00:05:28.502 "fast_io_fail_timeout_sec": 0, 00:05:28.502 "disable_auto_failback": false, 00:05:28.502 "generate_uuids": false, 00:05:28.502 "transport_tos": 0, 00:05:28.502 "nvme_error_stat": false, 00:05:28.502 "rdma_srq_size": 0, 00:05:28.502 "io_path_stat": false, 00:05:28.502 "allow_accel_sequence": false, 00:05:28.502 "rdma_max_cq_size": 0, 00:05:28.502 "rdma_cm_event_timeout_ms": 0, 00:05:28.502 "dhchap_digests": [ 00:05:28.502 "sha256", 00:05:28.502 "sha384", 00:05:28.502 "sha512" 00:05:28.502 ], 00:05:28.502 "dhchap_dhgroups": [ 00:05:28.502 "null", 00:05:28.502 "ffdhe2048", 00:05:28.502 "ffdhe3072", 00:05:28.502 "ffdhe4096", 00:05:28.502 "ffdhe6144", 00:05:28.502 "ffdhe8192" 00:05:28.502 ], 00:05:28.502 "rdma_umr_per_io": false 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "bdev_nvme_set_hotplug", 00:05:28.502 "params": { 00:05:28.502 "period_us": 100000, 00:05:28.502 "enable": false 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "bdev_wait_for_examine" 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "scsi", 00:05:28.502 "config": null 
00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "scheduler", 00:05:28.502 "config": [ 00:05:28.502 { 00:05:28.502 "method": "framework_set_scheduler", 00:05:28.502 "params": { 00:05:28.502 "name": "static" 00:05:28.502 } 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "vhost_scsi", 00:05:28.502 "config": [] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "vhost_blk", 00:05:28.502 "config": [] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "ublk", 00:05:28.502 "config": [] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "nbd", 00:05:28.502 "config": [] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "nvmf", 00:05:28.502 "config": [ 00:05:28.502 { 00:05:28.502 "method": "nvmf_set_config", 00:05:28.502 "params": { 00:05:28.502 "discovery_filter": "match_any", 00:05:28.502 "admin_cmd_passthru": { 00:05:28.502 "identify_ctrlr": false 00:05:28.502 }, 00:05:28.502 "dhchap_digests": [ 00:05:28.502 "sha256", 00:05:28.502 "sha384", 00:05:28.502 "sha512" 00:05:28.502 ], 00:05:28.502 "dhchap_dhgroups": [ 00:05:28.502 "null", 00:05:28.502 "ffdhe2048", 00:05:28.502 "ffdhe3072", 00:05:28.502 "ffdhe4096", 00:05:28.502 "ffdhe6144", 00:05:28.502 "ffdhe8192" 00:05:28.502 ] 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "nvmf_set_max_subsystems", 00:05:28.502 "params": { 00:05:28.502 "max_subsystems": 1024 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "nvmf_set_crdt", 00:05:28.502 "params": { 00:05:28.502 "crdt1": 0, 00:05:28.502 "crdt2": 0, 00:05:28.502 "crdt3": 0 00:05:28.502 } 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "method": "nvmf_create_transport", 00:05:28.502 "params": { 00:05:28.502 "trtype": "TCP", 00:05:28.502 "max_queue_depth": 128, 00:05:28.502 "max_io_qpairs_per_ctrlr": 127, 00:05:28.502 "in_capsule_data_size": 4096, 00:05:28.502 "max_io_size": 131072, 00:05:28.502 "io_unit_size": 131072, 00:05:28.502 "max_aq_depth": 128, 00:05:28.502 
"num_shared_buffers": 511, 00:05:28.502 "buf_cache_size": 4294967295, 00:05:28.502 "dif_insert_or_strip": false, 00:05:28.502 "zcopy": false, 00:05:28.502 "c2h_success": true, 00:05:28.502 "sock_priority": 0, 00:05:28.502 "abort_timeout_sec": 1, 00:05:28.502 "ack_timeout": 0, 00:05:28.502 "data_wr_pool_size": 0 00:05:28.502 } 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 }, 00:05:28.502 { 00:05:28.502 "subsystem": "iscsi", 00:05:28.502 "config": [ 00:05:28.502 { 00:05:28.502 "method": "iscsi_set_options", 00:05:28.502 "params": { 00:05:28.502 "node_base": "iqn.2016-06.io.spdk", 00:05:28.502 "max_sessions": 128, 00:05:28.502 "max_connections_per_session": 2, 00:05:28.502 "max_queue_depth": 64, 00:05:28.502 "default_time2wait": 2, 00:05:28.502 "default_time2retain": 20, 00:05:28.502 "first_burst_length": 8192, 00:05:28.502 "immediate_data": true, 00:05:28.502 "allow_duplicated_isid": false, 00:05:28.502 "error_recovery_level": 0, 00:05:28.502 "nop_timeout": 60, 00:05:28.502 "nop_in_interval": 30, 00:05:28.502 "disable_chap": false, 00:05:28.502 "require_chap": false, 00:05:28.502 "mutual_chap": false, 00:05:28.502 "chap_group": 0, 00:05:28.502 "max_large_datain_per_connection": 64, 00:05:28.502 "max_r2t_per_connection": 4, 00:05:28.502 "pdu_pool_size": 36864, 00:05:28.502 "immediate_data_pool_size": 16384, 00:05:28.502 "data_out_pool_size": 2048 00:05:28.502 } 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 } 00:05:28.502 ] 00:05:28.502 } 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71113 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71113 ']' 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71113 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:28.502 01:05:02 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71113 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.502 killing process with pid 71113 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71113' 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71113 00:05:28.502 01:05:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71113 00:05:28.763 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71136 00:05:28.763 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.763 01:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71136 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71136 ']' 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71136 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71136 00:05:34.063 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.063 killing process with pid 71136 00:05:34.063 01:05:07 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71136' 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71136 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71136 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.064 00:05:34.064 real 0m6.540s 00:05:34.064 user 0m6.217s 00:05:34.064 sys 0m0.502s 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.064 ************************************ 00:05:34.064 END TEST skip_rpc_with_json 00:05:34.064 ************************************ 00:05:34.064 01:05:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:34.064 01:05:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.064 01:05:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.064 01:05:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.064 ************************************ 00:05:34.064 START TEST skip_rpc_with_delay 00:05:34.064 ************************************ 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # 
local es=0 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:34.064 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.064 [2024-12-14 01:05:07.638651] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.323 00:05:34.323 real 0m0.118s 00:05:34.323 user 0m0.062s 00:05:34.323 sys 0m0.055s 00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.323 01:05:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:34.323 ************************************ 00:05:34.323 END TEST skip_rpc_with_delay 00:05:34.323 ************************************ 00:05:34.323 01:05:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:34.323 01:05:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:34.323 01:05:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:34.323 01:05:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.323 01:05:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.323 01:05:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.323 ************************************ 00:05:34.323 START TEST exit_on_failed_rpc_init 00:05:34.323 ************************************ 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71248 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71248 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71248 ']' 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.323 01:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.323 [2024-12-14 01:05:07.791279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:34.323 [2024-12-14 01:05:07.791394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71248 ] 00:05:34.583 [2024-12-14 01:05:07.932766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.583 [2024-12-14 01:05:07.950064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.151 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:35.152 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.152 [2024-12-14 01:05:08.696471] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:35.152 [2024-12-14 01:05:08.696586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71260 ] 00:05:35.411 [2024-12-14 01:05:08.841900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.411 [2024-12-14 01:05:08.860514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.411 [2024-12-14 01:05:08.860592] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:35.411 [2024-12-14 01:05:08.860608] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:35.411 [2024-12-14 01:05:08.860616] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71248 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71248 ']' 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71248 00:05:35.411 01:05:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71248 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.411 killing process with pid 71248 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71248' 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71248 00:05:35.411 01:05:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71248 00:05:35.671 00:05:35.671 real 0m1.458s 00:05:35.671 user 0m1.623s 00:05:35.671 sys 0m0.335s 00:05:35.671 01:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.671 01:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.671 ************************************ 00:05:35.671 END TEST exit_on_failed_rpc_init 00:05:35.671 ************************************ 00:05:35.671 01:05:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.671 00:05:35.671 real 0m13.671s 00:05:35.671 user 0m12.977s 00:05:35.671 sys 0m1.270s 00:05:35.671 01:05:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.671 01:05:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.671 ************************************ 00:05:35.671 END TEST skip_rpc 00:05:35.671 ************************************ 00:05:35.671 01:05:09 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.671 01:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.671 01:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.671 01:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.671 ************************************ 00:05:35.671 START TEST rpc_client 00:05:35.671 ************************************ 00:05:35.671 01:05:09 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.931 * Looking for test storage... 00:05:35.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.931 01:05:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc 
genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 01:05:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:35.931 OK 00:05:35.931 01:05:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.931 00:05:35.931 real 0m0.176s 00:05:35.931 user 0m0.107s 00:05:35.931 sys 0m0.080s 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.931 01:05:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.931 ************************************ 00:05:35.931 END TEST rpc_client 00:05:35.931 ************************************ 00:05:35.931 01:05:09 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.931 01:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.931 01:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.931 01:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.931 ************************************ 00:05:35.931 START TEST json_config 
00:05:35.931 ************************************ 00:05:35.931 01:05:09 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.931 01:05:09 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.931 01:05:09 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.931 01:05:09 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.192 01:05:09 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.192 01:05:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.192 01:05:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.192 01:05:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.192 01:05:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.192 01:05:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.192 01:05:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:36.192 01:05:09 json_config -- scripts/common.sh@345 -- # : 1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.192 01:05:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.192 01:05:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@353 -- # local d=1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.192 01:05:09 json_config -- scripts/common.sh@355 -- # echo 1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.192 01:05:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@353 -- # local d=2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.192 01:05:09 json_config -- scripts/common.sh@355 -- # echo 2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.192 01:05:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.192 01:05:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.192 01:05:09 json_config -- scripts/common.sh@368 -- # return 0 00:05:36.192 01:05:09 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.192 01:05:09 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.192 --rc genhtml_branch_coverage=1 00:05:36.192 --rc genhtml_function_coverage=1 00:05:36.192 --rc genhtml_legend=1 00:05:36.192 --rc geninfo_all_blocks=1 00:05:36.192 --rc geninfo_unexecuted_blocks=1 00:05:36.192 00:05:36.192 ' 00:05:36.192 01:05:09 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.192 --rc genhtml_branch_coverage=1 00:05:36.192 --rc genhtml_function_coverage=1 00:05:36.192 --rc genhtml_legend=1 00:05:36.192 --rc geninfo_all_blocks=1 00:05:36.192 --rc geninfo_unexecuted_blocks=1 00:05:36.192 00:05:36.192 ' 00:05:36.192 01:05:09 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.192 --rc genhtml_branch_coverage=1 00:05:36.192 --rc genhtml_function_coverage=1 00:05:36.192 --rc genhtml_legend=1 00:05:36.192 --rc geninfo_all_blocks=1 00:05:36.192 --rc geninfo_unexecuted_blocks=1 00:05:36.192 00:05:36.192 ' 00:05:36.192 01:05:09 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.192 --rc genhtml_branch_coverage=1 00:05:36.192 --rc genhtml_function_coverage=1 00:05:36.192 --rc genhtml_legend=1 00:05:36.192 --rc geninfo_all_blocks=1 00:05:36.192 --rc geninfo_unexecuted_blocks=1 00:05:36.192 00:05:36.192 ' 00:05:36.192 01:05:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4400a911-6c1d-4816-8e54-c91ba5397ac7 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=4400a911-6c1d-4816-8e54-c91ba5397ac7 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.192 01:05:09 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.192 01:05:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.193 01:05:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.193 01:05:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.193 01:05:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.193 01:05:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.193 01:05:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.193 01:05:09 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.193 01:05:09 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.193 01:05:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@51 -- # : 0 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.193 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.193 01:05:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.193 WARNING: No tests are enabled so not running JSON configuration tests 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:36.193 01:05:09 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:36.193 00:05:36.193 real 0m0.139s 00:05:36.193 user 0m0.083s 00:05:36.193 sys 0m0.059s 00:05:36.193 01:05:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.193 01:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.193 ************************************ 00:05:36.193 END TEST json_config 00:05:36.193 ************************************ 00:05:36.193 01:05:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.193 01:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.193 01:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.193 01:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.193 ************************************ 00:05:36.193 START TEST json_config_extra_key 00:05:36.193 ************************************ 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.193 01:05:09 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.193 --rc genhtml_branch_coverage=1 00:05:36.193 --rc genhtml_function_coverage=1 00:05:36.193 --rc genhtml_legend=1 00:05:36.193 --rc geninfo_all_blocks=1 00:05:36.193 --rc geninfo_unexecuted_blocks=1 00:05:36.193 00:05:36.193 ' 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.193 --rc genhtml_branch_coverage=1 00:05:36.193 --rc genhtml_function_coverage=1 00:05:36.193 --rc 
genhtml_legend=1 00:05:36.193 --rc geninfo_all_blocks=1 00:05:36.193 --rc geninfo_unexecuted_blocks=1 00:05:36.193 00:05:36.193 ' 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.193 --rc genhtml_branch_coverage=1 00:05:36.193 --rc genhtml_function_coverage=1 00:05:36.193 --rc genhtml_legend=1 00:05:36.193 --rc geninfo_all_blocks=1 00:05:36.193 --rc geninfo_unexecuted_blocks=1 00:05:36.193 00:05:36.193 ' 00:05:36.193 01:05:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.193 --rc genhtml_branch_coverage=1 00:05:36.193 --rc genhtml_function_coverage=1 00:05:36.193 --rc genhtml_legend=1 00:05:36.193 --rc geninfo_all_blocks=1 00:05:36.193 --rc geninfo_unexecuted_blocks=1 00:05:36.193 00:05:36.193 ' 00:05:36.193 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4400a911-6c1d-4816-8e54-c91ba5397ac7 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4400a911-6c1d-4816-8e54-c91ba5397ac7 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.193 01:05:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.193 01:05:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.193 01:05:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.194 01:05:09 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.194 01:05:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.194 01:05:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.194 01:05:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.194 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.194 01:05:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.194 INFO: launching applications... 00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:36.194 01:05:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71443 00:05:36.194 Waiting for target to run... 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71443 /var/tmp/spdk_tgt.sock 00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71443 ']' 00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.194 01:05:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.194 01:05:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.455 [2024-12-14 01:05:09.832294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:36.455 [2024-12-14 01:05:09.832414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71443 ] 00:05:36.715 [2024-12-14 01:05:10.132055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.715 [2024-12-14 01:05:10.148173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.288 01:05:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.288 00:05:37.288 01:05:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.288 INFO: shutting down applications... 00:05:37.288 01:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:37.288 01:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71443 ]] 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71443 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71443 00:05:37.288 01:05:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.860 01:05:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.861 SPDK target shutdown done 00:05:37.861 Success 00:05:37.861 ************************************ 00:05:37.861 END TEST json_config_extra_key 00:05:37.861 ************************************ 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71443 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.861 01:05:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.861 01:05:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.861 00:05:37.861 real 0m1.542s 00:05:37.861 user 0m1.256s 00:05:37.861 sys 0m0.334s 00:05:37.861 01:05:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:37.861 01:05:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.861 01:05:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.861 01:05:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.861 01:05:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.861 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:37.861 ************************************ 00:05:37.861 START TEST alias_rpc 00:05:37.861 ************************************ 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.861 * Looking for test storage... 00:05:37.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.861 01:05:11 alias_rpc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.861 01:05:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.861 --rc genhtml_branch_coverage=1 00:05:37.861 --rc genhtml_function_coverage=1 00:05:37.861 --rc genhtml_legend=1 00:05:37.861 --rc geninfo_all_blocks=1 00:05:37.861 --rc geninfo_unexecuted_blocks=1 00:05:37.861 00:05:37.861 ' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.861 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.861 --rc genhtml_branch_coverage=1 00:05:37.861 --rc genhtml_function_coverage=1 00:05:37.861 --rc genhtml_legend=1 00:05:37.861 --rc geninfo_all_blocks=1 00:05:37.861 --rc geninfo_unexecuted_blocks=1 00:05:37.861 00:05:37.861 ' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.861 --rc genhtml_branch_coverage=1 00:05:37.861 --rc genhtml_function_coverage=1 00:05:37.861 --rc genhtml_legend=1 00:05:37.861 --rc geninfo_all_blocks=1 00:05:37.861 --rc geninfo_unexecuted_blocks=1 00:05:37.861 00:05:37.861 ' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.861 --rc genhtml_branch_coverage=1 00:05:37.861 --rc genhtml_function_coverage=1 00:05:37.861 --rc genhtml_legend=1 00:05:37.861 --rc geninfo_all_blocks=1 00:05:37.861 --rc geninfo_unexecuted_blocks=1 00:05:37.861 00:05:37.861 ' 00:05:37.861 01:05:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.861 01:05:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71516 00:05:37.861 01:05:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71516 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71516 ']' 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.861 01:05:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.861 01:05:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.120 [2024-12-14 01:05:11.471078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:38.120 [2024-12-14 01:05:11.471230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71516 ] 00:05:38.120 [2024-12-14 01:05:11.612177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.120 [2024-12-14 01:05:11.632987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.054 01:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.054 01:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71516 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71516 ']' 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71516 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71516 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.054 killing process with pid 71516 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.054 01:05:12 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71516' 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 71516 00:05:39.054 01:05:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 71516 00:05:39.314 00:05:39.314 real 0m1.561s 00:05:39.314 user 0m1.686s 00:05:39.314 sys 0m0.394s 00:05:39.314 01:05:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.314 01:05:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.314 ************************************ 00:05:39.314 END TEST alias_rpc 00:05:39.314 ************************************ 00:05:39.314 01:05:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:39.314 01:05:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.314 01:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.314 01:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.314 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.314 ************************************ 00:05:39.314 START TEST spdkcli_tcp 00:05:39.314 ************************************ 00:05:39.314 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.314 * Looking for test storage... 
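The `lt 1.15 2` gate traced above (scripts/common.sh `cmp_versions`) compares dotted version strings field by field, treating missing fields as 0. A condensed sketch of the same idea (`version_lt` is an illustrative name, not the actual helper, and it assumes at most three numeric fields):

```shell
#!/usr/bin/env sh
# version_lt A B: succeed (exit 0) when dotted version A sorts strictly
# before B, comparing numeric fields left to right; missing fields are 0.
version_lt() {
    va=$1 vb=$2
    old_ifs=$IFS; IFS=.
    set -- $va; a1=${1:-0}; a2=${2:-0}; a3=${3:-0}
    set -- $vb; b1=${1:-0}; b2=${2:-0}; b3=${3:-0}
    IFS=$old_ifs
    [ "$a1" -ne "$b1" ] && { [ "$a1" -lt "$b1" ]; return; }
    [ "$a2" -ne "$b2" ] && { [ "$a2" -lt "$b2" ]; return; }
    [ "$a3" -lt "$b3" ]
}
```

So `version_lt 1.15 2` succeeds (1 < 2 on the first field) and the trace's coverage options get enabled only when the installed lcov is new enough.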
00:05:39.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.314 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.314 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.314 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.575 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.575 01:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.575 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.575 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.575 --rc genhtml_branch_coverage=1 00:05:39.575 --rc genhtml_function_coverage=1 00:05:39.575 --rc genhtml_legend=1 00:05:39.575 --rc geninfo_all_blocks=1 00:05:39.575 --rc geninfo_unexecuted_blocks=1 00:05:39.575 00:05:39.575 ' 00:05:39.575 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.575 --rc genhtml_branch_coverage=1 00:05:39.575 --rc genhtml_function_coverage=1 00:05:39.575 --rc genhtml_legend=1 00:05:39.575 --rc geninfo_all_blocks=1 00:05:39.575 --rc geninfo_unexecuted_blocks=1 00:05:39.575 00:05:39.575 ' 00:05:39.575 01:05:12 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.575 --rc genhtml_branch_coverage=1 00:05:39.575 --rc genhtml_function_coverage=1 00:05:39.575 --rc genhtml_legend=1 00:05:39.575 --rc geninfo_all_blocks=1 00:05:39.575 --rc geninfo_unexecuted_blocks=1 00:05:39.575 00:05:39.575 ' 00:05:39.575 01:05:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.576 --rc genhtml_branch_coverage=1 00:05:39.576 --rc genhtml_function_coverage=1 00:05:39.576 --rc genhtml_legend=1 00:05:39.576 --rc geninfo_all_blocks=1 00:05:39.576 --rc geninfo_unexecuted_blocks=1 00:05:39.576 00:05:39.576 ' 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71596 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71596 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71596 ']' 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.576 01:05:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.576 01:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.576 [2024-12-14 01:05:13.045149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:39.576 [2024-12-14 01:05:13.045264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71596 ] 00:05:39.838 [2024-12-14 01:05:13.190180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.838 [2024-12-14 01:05:13.210585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.838 [2024-12-14 01:05:13.210615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.408 01:05:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.408 01:05:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:40.408 01:05:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71613 00:05:40.408 01:05:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.408 01:05:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.671 [ 00:05:40.671 "bdev_malloc_delete", 00:05:40.671 
"bdev_malloc_create", 00:05:40.671 "bdev_null_resize", 00:05:40.671 "bdev_null_delete", 00:05:40.671 "bdev_null_create", 00:05:40.671 "bdev_nvme_cuse_unregister", 00:05:40.671 "bdev_nvme_cuse_register", 00:05:40.671 "bdev_opal_new_user", 00:05:40.671 "bdev_opal_set_lock_state", 00:05:40.671 "bdev_opal_delete", 00:05:40.671 "bdev_opal_get_info", 00:05:40.671 "bdev_opal_create", 00:05:40.671 "bdev_nvme_opal_revert", 00:05:40.671 "bdev_nvme_opal_init", 00:05:40.671 "bdev_nvme_send_cmd", 00:05:40.671 "bdev_nvme_set_keys", 00:05:40.671 "bdev_nvme_get_path_iostat", 00:05:40.671 "bdev_nvme_get_mdns_discovery_info", 00:05:40.671 "bdev_nvme_stop_mdns_discovery", 00:05:40.671 "bdev_nvme_start_mdns_discovery", 00:05:40.671 "bdev_nvme_set_multipath_policy", 00:05:40.671 "bdev_nvme_set_preferred_path", 00:05:40.671 "bdev_nvme_get_io_paths", 00:05:40.671 "bdev_nvme_remove_error_injection", 00:05:40.671 "bdev_nvme_add_error_injection", 00:05:40.671 "bdev_nvme_get_discovery_info", 00:05:40.671 "bdev_nvme_stop_discovery", 00:05:40.671 "bdev_nvme_start_discovery", 00:05:40.671 "bdev_nvme_get_controller_health_info", 00:05:40.671 "bdev_nvme_disable_controller", 00:05:40.671 "bdev_nvme_enable_controller", 00:05:40.671 "bdev_nvme_reset_controller", 00:05:40.671 "bdev_nvme_get_transport_statistics", 00:05:40.671 "bdev_nvme_apply_firmware", 00:05:40.671 "bdev_nvme_detach_controller", 00:05:40.671 "bdev_nvme_get_controllers", 00:05:40.671 "bdev_nvme_attach_controller", 00:05:40.671 "bdev_nvme_set_hotplug", 00:05:40.671 "bdev_nvme_set_options", 00:05:40.671 "bdev_passthru_delete", 00:05:40.671 "bdev_passthru_create", 00:05:40.671 "bdev_lvol_set_parent_bdev", 00:05:40.671 "bdev_lvol_set_parent", 00:05:40.671 "bdev_lvol_check_shallow_copy", 00:05:40.671 "bdev_lvol_start_shallow_copy", 00:05:40.671 "bdev_lvol_grow_lvstore", 00:05:40.671 "bdev_lvol_get_lvols", 00:05:40.671 "bdev_lvol_get_lvstores", 00:05:40.671 "bdev_lvol_delete", 00:05:40.671 "bdev_lvol_set_read_only", 00:05:40.671 
"bdev_lvol_resize", 00:05:40.671 "bdev_lvol_decouple_parent", 00:05:40.671 "bdev_lvol_inflate", 00:05:40.671 "bdev_lvol_rename", 00:05:40.671 "bdev_lvol_clone_bdev", 00:05:40.671 "bdev_lvol_clone", 00:05:40.671 "bdev_lvol_snapshot", 00:05:40.671 "bdev_lvol_create", 00:05:40.671 "bdev_lvol_delete_lvstore", 00:05:40.671 "bdev_lvol_rename_lvstore", 00:05:40.671 "bdev_lvol_create_lvstore", 00:05:40.671 "bdev_raid_set_options", 00:05:40.671 "bdev_raid_remove_base_bdev", 00:05:40.671 "bdev_raid_add_base_bdev", 00:05:40.671 "bdev_raid_delete", 00:05:40.671 "bdev_raid_create", 00:05:40.671 "bdev_raid_get_bdevs", 00:05:40.671 "bdev_error_inject_error", 00:05:40.671 "bdev_error_delete", 00:05:40.671 "bdev_error_create", 00:05:40.671 "bdev_split_delete", 00:05:40.671 "bdev_split_create", 00:05:40.671 "bdev_delay_delete", 00:05:40.671 "bdev_delay_create", 00:05:40.671 "bdev_delay_update_latency", 00:05:40.671 "bdev_zone_block_delete", 00:05:40.671 "bdev_zone_block_create", 00:05:40.671 "blobfs_create", 00:05:40.671 "blobfs_detect", 00:05:40.671 "blobfs_set_cache_size", 00:05:40.671 "bdev_xnvme_delete", 00:05:40.671 "bdev_xnvme_create", 00:05:40.671 "bdev_aio_delete", 00:05:40.671 "bdev_aio_rescan", 00:05:40.671 "bdev_aio_create", 00:05:40.671 "bdev_ftl_set_property", 00:05:40.671 "bdev_ftl_get_properties", 00:05:40.671 "bdev_ftl_get_stats", 00:05:40.671 "bdev_ftl_unmap", 00:05:40.671 "bdev_ftl_unload", 00:05:40.671 "bdev_ftl_delete", 00:05:40.671 "bdev_ftl_load", 00:05:40.671 "bdev_ftl_create", 00:05:40.671 "bdev_virtio_attach_controller", 00:05:40.671 "bdev_virtio_scsi_get_devices", 00:05:40.671 "bdev_virtio_detach_controller", 00:05:40.671 "bdev_virtio_blk_set_hotplug", 00:05:40.671 "bdev_iscsi_delete", 00:05:40.671 "bdev_iscsi_create", 00:05:40.671 "bdev_iscsi_set_options", 00:05:40.671 "accel_error_inject_error", 00:05:40.671 "ioat_scan_accel_module", 00:05:40.671 "dsa_scan_accel_module", 00:05:40.671 "iaa_scan_accel_module", 00:05:40.671 "keyring_file_remove_key", 
00:05:40.671 "keyring_file_add_key", 00:05:40.671 "keyring_linux_set_options", 00:05:40.671 "fsdev_aio_delete", 00:05:40.671 "fsdev_aio_create", 00:05:40.671 "iscsi_get_histogram", 00:05:40.671 "iscsi_enable_histogram", 00:05:40.671 "iscsi_set_options", 00:05:40.671 "iscsi_get_auth_groups", 00:05:40.671 "iscsi_auth_group_remove_secret", 00:05:40.671 "iscsi_auth_group_add_secret", 00:05:40.671 "iscsi_delete_auth_group", 00:05:40.671 "iscsi_create_auth_group", 00:05:40.671 "iscsi_set_discovery_auth", 00:05:40.671 "iscsi_get_options", 00:05:40.671 "iscsi_target_node_request_logout", 00:05:40.671 "iscsi_target_node_set_redirect", 00:05:40.671 "iscsi_target_node_set_auth", 00:05:40.671 "iscsi_target_node_add_lun", 00:05:40.671 "iscsi_get_stats", 00:05:40.671 "iscsi_get_connections", 00:05:40.671 "iscsi_portal_group_set_auth", 00:05:40.671 "iscsi_start_portal_group", 00:05:40.671 "iscsi_delete_portal_group", 00:05:40.671 "iscsi_create_portal_group", 00:05:40.671 "iscsi_get_portal_groups", 00:05:40.671 "iscsi_delete_target_node", 00:05:40.671 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.671 "iscsi_target_node_add_pg_ig_maps", 00:05:40.671 "iscsi_create_target_node", 00:05:40.671 "iscsi_get_target_nodes", 00:05:40.671 "iscsi_delete_initiator_group", 00:05:40.671 "iscsi_initiator_group_remove_initiators", 00:05:40.671 "iscsi_initiator_group_add_initiators", 00:05:40.671 "iscsi_create_initiator_group", 00:05:40.671 "iscsi_get_initiator_groups", 00:05:40.671 "nvmf_set_crdt", 00:05:40.671 "nvmf_set_config", 00:05:40.671 "nvmf_set_max_subsystems", 00:05:40.671 "nvmf_stop_mdns_prr", 00:05:40.671 "nvmf_publish_mdns_prr", 00:05:40.671 "nvmf_subsystem_get_listeners", 00:05:40.671 "nvmf_subsystem_get_qpairs", 00:05:40.671 "nvmf_subsystem_get_controllers", 00:05:40.671 "nvmf_get_stats", 00:05:40.671 "nvmf_get_transports", 00:05:40.671 "nvmf_create_transport", 00:05:40.671 "nvmf_get_targets", 00:05:40.671 "nvmf_delete_target", 00:05:40.671 "nvmf_create_target", 00:05:40.671 
"nvmf_subsystem_allow_any_host", 00:05:40.671 "nvmf_subsystem_set_keys", 00:05:40.671 "nvmf_subsystem_remove_host", 00:05:40.671 "nvmf_subsystem_add_host", 00:05:40.671 "nvmf_ns_remove_host", 00:05:40.671 "nvmf_ns_add_host", 00:05:40.671 "nvmf_subsystem_remove_ns", 00:05:40.671 "nvmf_subsystem_set_ns_ana_group", 00:05:40.671 "nvmf_subsystem_add_ns", 00:05:40.671 "nvmf_subsystem_listener_set_ana_state", 00:05:40.671 "nvmf_discovery_get_referrals", 00:05:40.671 "nvmf_discovery_remove_referral", 00:05:40.671 "nvmf_discovery_add_referral", 00:05:40.671 "nvmf_subsystem_remove_listener", 00:05:40.671 "nvmf_subsystem_add_listener", 00:05:40.671 "nvmf_delete_subsystem", 00:05:40.671 "nvmf_create_subsystem", 00:05:40.671 "nvmf_get_subsystems", 00:05:40.671 "env_dpdk_get_mem_stats", 00:05:40.671 "nbd_get_disks", 00:05:40.671 "nbd_stop_disk", 00:05:40.671 "nbd_start_disk", 00:05:40.671 "ublk_recover_disk", 00:05:40.671 "ublk_get_disks", 00:05:40.671 "ublk_stop_disk", 00:05:40.671 "ublk_start_disk", 00:05:40.671 "ublk_destroy_target", 00:05:40.671 "ublk_create_target", 00:05:40.671 "virtio_blk_create_transport", 00:05:40.671 "virtio_blk_get_transports", 00:05:40.671 "vhost_controller_set_coalescing", 00:05:40.671 "vhost_get_controllers", 00:05:40.671 "vhost_delete_controller", 00:05:40.671 "vhost_create_blk_controller", 00:05:40.671 "vhost_scsi_controller_remove_target", 00:05:40.672 "vhost_scsi_controller_add_target", 00:05:40.672 "vhost_start_scsi_controller", 00:05:40.672 "vhost_create_scsi_controller", 00:05:40.672 "thread_set_cpumask", 00:05:40.672 "scheduler_set_options", 00:05:40.672 "framework_get_governor", 00:05:40.672 "framework_get_scheduler", 00:05:40.672 "framework_set_scheduler", 00:05:40.672 "framework_get_reactors", 00:05:40.672 "thread_get_io_channels", 00:05:40.672 "thread_get_pollers", 00:05:40.672 "thread_get_stats", 00:05:40.672 "framework_monitor_context_switch", 00:05:40.672 "spdk_kill_instance", 00:05:40.672 "log_enable_timestamps", 00:05:40.672 
"log_get_flags", 00:05:40.672 "log_clear_flag", 00:05:40.672 "log_set_flag", 00:05:40.672 "log_get_level", 00:05:40.672 "log_set_level", 00:05:40.672 "log_get_print_level", 00:05:40.672 "log_set_print_level", 00:05:40.672 "framework_enable_cpumask_locks", 00:05:40.672 "framework_disable_cpumask_locks", 00:05:40.672 "framework_wait_init", 00:05:40.672 "framework_start_init", 00:05:40.672 "scsi_get_devices", 00:05:40.672 "bdev_get_histogram", 00:05:40.672 "bdev_enable_histogram", 00:05:40.672 "bdev_set_qos_limit", 00:05:40.672 "bdev_set_qd_sampling_period", 00:05:40.672 "bdev_get_bdevs", 00:05:40.672 "bdev_reset_iostat", 00:05:40.672 "bdev_get_iostat", 00:05:40.672 "bdev_examine", 00:05:40.672 "bdev_wait_for_examine", 00:05:40.672 "bdev_set_options", 00:05:40.672 "accel_get_stats", 00:05:40.672 "accel_set_options", 00:05:40.672 "accel_set_driver", 00:05:40.672 "accel_crypto_key_destroy", 00:05:40.672 "accel_crypto_keys_get", 00:05:40.672 "accel_crypto_key_create", 00:05:40.672 "accel_assign_opc", 00:05:40.672 "accel_get_module_info", 00:05:40.672 "accel_get_opc_assignments", 00:05:40.672 "vmd_rescan", 00:05:40.672 "vmd_remove_device", 00:05:40.672 "vmd_enable", 00:05:40.672 "sock_get_default_impl", 00:05:40.672 "sock_set_default_impl", 00:05:40.672 "sock_impl_set_options", 00:05:40.672 "sock_impl_get_options", 00:05:40.672 "iobuf_get_stats", 00:05:40.672 "iobuf_set_options", 00:05:40.672 "keyring_get_keys", 00:05:40.672 "framework_get_pci_devices", 00:05:40.672 "framework_get_config", 00:05:40.672 "framework_get_subsystems", 00:05:40.672 "fsdev_set_opts", 00:05:40.672 "fsdev_get_opts", 00:05:40.672 "trace_get_info", 00:05:40.672 "trace_get_tpoint_group_mask", 00:05:40.672 "trace_disable_tpoint_group", 00:05:40.672 "trace_enable_tpoint_group", 00:05:40.672 "trace_clear_tpoint_mask", 00:05:40.672 "trace_set_tpoint_mask", 00:05:40.672 "notify_get_notifications", 00:05:40.672 "notify_get_types", 00:05:40.672 "spdk_get_version", 00:05:40.672 "rpc_get_methods" 00:05:40.672 
] 00:05:40.672 01:05:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.672 01:05:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.672 01:05:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71596 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71596 ']' 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71596 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71596 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.672 killing process with pid 71596 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71596' 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71596 00:05:40.672 01:05:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71596 00:05:40.932 ************************************ 00:05:40.932 END TEST spdkcli_tcp 00:05:40.932 ************************************ 00:05:40.932 00:05:40.932 real 0m1.560s 00:05:40.932 user 0m2.834s 00:05:40.932 sys 0m0.378s 00:05:40.932 01:05:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.932 01:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.932 01:05:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.932 01:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:05:40.932 01:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.932 01:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:40.932 ************************************ 00:05:40.932 START TEST dpdk_mem_utility 00:05:40.932 ************************************ 00:05:40.932 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.932 * Looking for test storage... 00:05:40.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:40.932 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.932 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.932 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.193 01:05:14 dpdk_mem_utility -- 
scripts/common.sh@345 -- # : 1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.193 01:05:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.193 --rc genhtml_branch_coverage=1 00:05:41.193 --rc genhtml_function_coverage=1 00:05:41.193 --rc genhtml_legend=1 00:05:41.193 --rc geninfo_all_blocks=1 00:05:41.193 --rc geninfo_unexecuted_blocks=1 00:05:41.193 00:05:41.193 ' 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.193 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:41.193 --rc genhtml_branch_coverage=1 00:05:41.193 --rc genhtml_function_coverage=1 00:05:41.193 --rc genhtml_legend=1 00:05:41.193 --rc geninfo_all_blocks=1 00:05:41.193 --rc geninfo_unexecuted_blocks=1 00:05:41.193 00:05:41.193 ' 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.193 --rc genhtml_branch_coverage=1 00:05:41.193 --rc genhtml_function_coverage=1 00:05:41.193 --rc genhtml_legend=1 00:05:41.193 --rc geninfo_all_blocks=1 00:05:41.193 --rc geninfo_unexecuted_blocks=1 00:05:41.193 00:05:41.193 ' 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.193 --rc genhtml_branch_coverage=1 00:05:41.193 --rc genhtml_function_coverage=1 00:05:41.193 --rc genhtml_legend=1 00:05:41.193 --rc geninfo_all_blocks=1 00:05:41.193 --rc geninfo_unexecuted_blocks=1 00:05:41.193 00:05:41.193 ' 00:05:41.193 01:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.193 01:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71690 00:05:41.193 01:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71690 00:05:41.193 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71690 ']' 00:05:41.194 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.194 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.194 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.194 01:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:41.194 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.194 01:05:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:41.194 [2024-12-14 01:05:14.639466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:05:41.194 [2024-12-14 01:05:14.639574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71690 ]
00:05:41.194 [2024-12-14 01:05:14.784103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.455 [2024-12-14 01:05:14.802740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.031 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.031 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:42.031 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:42.031 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:42.031 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:42.031 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:42.031 {
00:05:42.031 "filename": "/tmp/spdk_mem_dump.txt"
00:05:42.031 }
00:05:42.031 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:42.031 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:42.031 DPDK memory size 818.000000 MiB in 1 heap(s)
00:05:42.031 1 heaps totaling size 818.000000 MiB
00:05:42.031 size: 818.000000 MiB heap id: 0
00:05:42.031 end heaps----------
00:05:42.031 9 mempools totaling size 603.782043 MiB
00:05:42.031 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:42.031 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:42.031 size: 100.555481 MiB name: bdev_io_71690
00:05:42.031 size: 50.003479 MiB name: msgpool_71690
00:05:42.031 size: 36.509338 MiB name: fsdev_io_71690
00:05:42.031 size: 21.763794 MiB name: PDU_Pool
00:05:42.031 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:42.031 size: 4.133484 MiB name: evtpool_71690
00:05:42.031 size: 0.026123 MiB name: Session_Pool
00:05:42.031 end mempools-------
00:05:42.031 6 memzones totaling size 4.142822 MiB
00:05:42.031 size: 1.000366 MiB name: RG_ring_0_71690
00:05:42.031 size: 1.000366 MiB name: RG_ring_1_71690
00:05:42.031 size: 1.000366 MiB name: RG_ring_4_71690
00:05:42.031 size: 1.000366 MiB name: RG_ring_5_71690
00:05:42.031 size: 0.125366 MiB name: RG_ring_2_71690
00:05:42.031 size: 0.015991 MiB name: RG_ring_3_71690
00:05:42.031 end memzones-------
00:05:42.031 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:42.031 heap id: 0 total size: 818.000000 MiB number of busy elements: 315 number of free elements: 15
00:05:42.031 list of free elements. size: 10.802856 MiB
00:05:42.031 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:42.031 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:42.031 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:42.031 element at address: 0x200000400000 with size: 0.993958 MiB
00:05:42.031 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:42.031 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:42.031 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:42.031 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:42.031 element at address: 0x20001ae00000 with size: 0.568054 MiB
00:05:42.031 element at address: 0x20000a600000 with size: 0.488892 MiB
00:05:42.031 element at address: 0x200000c00000 with size: 0.486267 MiB
00:05:42.031 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:42.031 element at address: 0x200003e00000 with size: 0.480286 MiB
00:05:42.031 element at address: 0x200028200000 with size: 0.395752 MiB
00:05:42.031 element at address: 0x200000800000 with size: 0.351746 MiB
00:05:42.031 list of standard malloc elements.
size: 199.268250 MiB 00:05:42.031 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:42.031 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:42.031 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:42.031 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:42.031 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:42.031 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.031 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:42.031 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.031 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:42.031 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:42.031 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:42.031 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:42.032 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d3c0 with 
size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:42.032 element at address: 
0x200000c7e8c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:42.032 
element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:42.032 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:42.032 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae922c0 with size: 0.000183 
MiB 00:05:42.033 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae937c0 
with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:42.033 element at 
address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:42.033 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:42.033 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826cd80 with size: 0.000183 MiB 
00:05:42.033 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:42.033 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e280 with 
size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:42.034 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:42.034 element at address: 
0x20002826f780 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826f840 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826f900 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826f9c0 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fa80 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fb40 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fc00 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fcc0 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fd80 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:42.034 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:42.034 list of memzone associated elements. size: 607.928894 MiB
00:05:42.034 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:42.034 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:42.034 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:42.034 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:42.034 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:42.034 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71690_0
00:05:42.034 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:42.034 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71690_0
00:05:42.034 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:42.034 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71690_0
00:05:42.034 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:42.034 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:42.034 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:42.034 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:42.034 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:42.034 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71690_0
00:05:42.034 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:42.034 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71690
00:05:42.034 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:42.034 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71690
00:05:42.034 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:42.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:42.034 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:42.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:42.034 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:42.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:42.034 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:42.034 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:42.034 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:42.034 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71690
00:05:42.034 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:42.034 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71690
00:05:42.034 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:42.034 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71690
00:05:42.034 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:42.034 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71690
00:05:42.034 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:42.034 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71690
00:05:42.034 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:42.034 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71690
00:05:42.034 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:42.034 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:42.035 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:42.035 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:42.035 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:42.035 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:42.035 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:42.035 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71690
00:05:42.035 element at address: 0x20000085e640 with size: 0.125488 MiB
00:05:42.035 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71690
00:05:42.035 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:42.035 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:42.035 element at address: 0x200028265680 with size: 0.023743 MiB
00:05:42.035 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:42.035 element at address: 0x20000085a380 with size: 0.016113 MiB
00:05:42.035 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71690
00:05:42.035 element at address: 0x20002826b7c0 with size: 0.002441 MiB
00:05:42.035 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:42.035 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:05:42.035 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71690
00:05:42.035 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:42.035 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71690
00:05:42.035 element at address: 0x20000085a180 with size: 0.000305 MiB
00:05:42.035 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71690
00:05:42.035 element at address: 0x20002826c280 with size: 0.000305 MiB
00:05:42.035 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:42.035 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:42.035 01:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71690
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71690 ']'
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71690
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71690
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71690'
killing process with pid 71690
01:05:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71690
00:05:42.035 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71690
00:05:42.297
00:05:42.297 real 0m1.440s
00:05:42.297 user 0m1.493s
00:05:42.297 sys 0m0.358s
00:05:42.297 ************************************
00:05:42.297 END TEST dpdk_mem_utility
00:05:42.297 ************************************
00:05:42.297 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.297 01:05:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:42.559 01:05:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:42.559 01:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:42.559 01:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.559 01:05:15 -- common/autotest_common.sh@10 -- # set +x
00:05:42.559 ************************************
00:05:42.559 START TEST event 00:05:42.559 ************************************ 00:05:42.559 01:05:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.559 * Looking for test storage... 00:05:42.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.559 01:05:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.559 01:05:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.559 01:05:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.559 01:05:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.559 01:05:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.559 01:05:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.559 01:05:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.559 01:05:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.559 01:05:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.559 01:05:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.559 01:05:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.559 01:05:16 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.559 01:05:16 event -- scripts/common.sh@345 -- # : 1 00:05:42.559 01:05:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.559 01:05:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.559 01:05:16 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.559 01:05:16 event -- scripts/common.sh@353 -- # local d=1 00:05:42.559 01:05:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.559 01:05:16 event -- scripts/common.sh@355 -- # echo 1 00:05:42.559 01:05:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.559 01:05:16 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.559 01:05:16 event -- scripts/common.sh@353 -- # local d=2 00:05:42.559 01:05:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.559 01:05:16 event -- scripts/common.sh@355 -- # echo 2 00:05:42.559 01:05:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.559 01:05:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.559 01:05:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.559 01:05:16 event -- scripts/common.sh@368 -- # return 0 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.559 --rc genhtml_branch_coverage=1 00:05:42.559 --rc genhtml_function_coverage=1 00:05:42.559 --rc genhtml_legend=1 00:05:42.559 --rc geninfo_all_blocks=1 00:05:42.559 --rc geninfo_unexecuted_blocks=1 00:05:42.559 00:05:42.559 ' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.559 --rc genhtml_branch_coverage=1 00:05:42.559 --rc genhtml_function_coverage=1 00:05:42.559 --rc genhtml_legend=1 00:05:42.559 --rc geninfo_all_blocks=1 00:05:42.559 --rc geninfo_unexecuted_blocks=1 00:05:42.559 00:05:42.559 ' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.559 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:42.559 --rc genhtml_branch_coverage=1 00:05:42.559 --rc genhtml_function_coverage=1 00:05:42.559 --rc genhtml_legend=1 00:05:42.559 --rc geninfo_all_blocks=1 00:05:42.559 --rc geninfo_unexecuted_blocks=1 00:05:42.559 00:05:42.559 ' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.559 --rc genhtml_branch_coverage=1 00:05:42.559 --rc genhtml_function_coverage=1 00:05:42.559 --rc genhtml_legend=1 00:05:42.559 --rc geninfo_all_blocks=1 00:05:42.559 --rc geninfo_unexecuted_blocks=1 00:05:42.559 00:05:42.559 ' 00:05:42.559 01:05:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.559 01:05:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.559 01:05:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:42.559 01:05:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.559 01:05:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.559 ************************************ 00:05:42.559 START TEST event_perf 00:05:42.559 ************************************ 00:05:42.559 01:05:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.559 Running I/O for 1 seconds...[2024-12-14 01:05:16.123609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:42.559 [2024-12-14 01:05:16.123764] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71771 ] 00:05:42.820 [2024-12-14 01:05:16.273272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.820 [2024-12-14 01:05:16.293880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.820 Running I/O for 1 seconds...[2024-12-14 01:05:16.294258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.820 [2024-12-14 01:05:16.294403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.820 [2024-12-14 01:05:16.294501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.761 00:05:43.761 lcore 0: 177256 00:05:43.761 lcore 1: 177253 00:05:43.761 lcore 2: 177256 00:05:43.761 lcore 3: 177255 00:05:43.761 done. 
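The `-m 0xF` core mask passed to event_perf above is why exactly lcores 0 through 3 report in. As a minimal illustration (this `decode_coremask` helper is hypothetical, not part of the SPDK scripts), the mask-to-lcore mapping can be sketched in bash:

```shell
#!/usr/bin/env bash
# Hypothetical helper: decode a DPDK-style hex core mask (e.g. the 0xF
# given via -m above) into the lcore IDs it enables, lowest bit = lcore 0.
decode_coremask() {
  local mask=$(( $1 ))   # bash arithmetic accepts the 0x prefix
  local lcore=0 ids=""
  while (( mask )); do
    if (( mask & 1 )); then
      ids="$ids$lcore "
    fi
    mask=$(( mask >> 1 ))
    lcore=$(( lcore + 1 ))
  done
  echo "${ids% }"
}

decode_coremask 0xF   # prints: 0 1 2 3
```

The same decoding explains the single-core runs later in the log: `-c 0x1` enables only lcore 0, matching the lone "Reactor started on core 0" notices.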
00:05:43.761 00:05:43.761 real 0m1.263s 00:05:43.761 user 0m4.058s 00:05:43.761 sys 0m0.084s 00:05:43.761 01:05:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.761 01:05:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.761 ************************************ 00:05:43.761 END TEST event_perf 00:05:43.761 ************************************ 00:05:44.022 01:05:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.022 01:05:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.022 01:05:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.022 01:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.022 ************************************ 00:05:44.022 START TEST event_reactor 00:05:44.022 ************************************ 00:05:44.022 01:05:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.022 [2024-12-14 01:05:17.445169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:44.022 [2024-12-14 01:05:17.445310] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71810 ] 00:05:44.022 [2024-12-14 01:05:17.593000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.022 [2024-12-14 01:05:17.621788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.410 test_start 00:05:45.410 oneshot 00:05:45.410 tick 100 00:05:45.410 tick 100 00:05:45.410 tick 250 00:05:45.410 tick 100 00:05:45.410 tick 100 00:05:45.410 tick 100 00:05:45.410 tick 250 00:05:45.410 tick 500 00:05:45.410 tick 100 00:05:45.410 tick 100 00:05:45.410 tick 250 00:05:45.410 tick 100 00:05:45.410 tick 100 00:05:45.410 test_end 00:05:45.410 00:05:45.410 real 0m1.261s 00:05:45.410 user 0m1.082s 00:05:45.410 sys 0m0.068s 00:05:45.410 01:05:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.410 ************************************ 00:05:45.410 END TEST event_reactor 00:05:45.410 ************************************ 00:05:45.410 01:05:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.410 01:05:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.410 01:05:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:45.410 01:05:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.410 01:05:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.410 ************************************ 00:05:45.410 START TEST event_reactor_perf 00:05:45.410 ************************************ 00:05:45.410 01:05:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.410 [2024-12-14 
01:05:18.774064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:45.410 [2024-12-14 01:05:18.774219] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71841 ] 00:05:45.410 [2024-12-14 01:05:18.918990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.410 [2024-12-14 01:05:18.942501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.793 test_start 00:05:46.793 test_end 00:05:46.793 Performance: 311946 events per second 00:05:46.793 00:05:46.793 real 0m1.240s 00:05:46.793 user 0m1.072s 00:05:46.793 sys 0m0.061s 00:05:46.793 ************************************ 00:05:46.793 END TEST event_reactor_perf 00:05:46.793 ************************************ 00:05:46.793 01:05:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.793 01:05:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.793 01:05:20 event -- event/event.sh@49 -- # uname -s 00:05:46.793 01:05:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.793 01:05:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.793 01:05:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.793 01:05:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.793 01:05:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.793 ************************************ 00:05:46.793 START TEST event_scheduler 00:05:46.793 ************************************ 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.793 * Looking for test storage... 
00:05:46.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.793 01:05:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.793 --rc genhtml_branch_coverage=1 00:05:46.793 --rc genhtml_function_coverage=1 00:05:46.793 --rc genhtml_legend=1 00:05:46.793 --rc geninfo_all_blocks=1 00:05:46.793 --rc geninfo_unexecuted_blocks=1 00:05:46.793 00:05:46.793 ' 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.793 --rc genhtml_branch_coverage=1 00:05:46.793 --rc genhtml_function_coverage=1 00:05:46.793 --rc 
genhtml_legend=1 00:05:46.793 --rc geninfo_all_blocks=1 00:05:46.793 --rc geninfo_unexecuted_blocks=1 00:05:46.793 00:05:46.793 ' 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.793 --rc genhtml_branch_coverage=1 00:05:46.793 --rc genhtml_function_coverage=1 00:05:46.793 --rc genhtml_legend=1 00:05:46.793 --rc geninfo_all_blocks=1 00:05:46.793 --rc geninfo_unexecuted_blocks=1 00:05:46.793 00:05:46.793 ' 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.793 --rc genhtml_branch_coverage=1 00:05:46.793 --rc genhtml_function_coverage=1 00:05:46.793 --rc genhtml_legend=1 00:05:46.793 --rc geninfo_all_blocks=1 00:05:46.793 --rc geninfo_unexecuted_blocks=1 00:05:46.793 00:05:46.793 ' 00:05:46.793 01:05:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.793 01:05:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71912 00:05:46.793 01:05:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.793 01:05:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71912 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71912 ']' 00:05:46.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.793 01:05:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.793 01:05:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.793 [2024-12-14 01:05:20.273516] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:46.793 [2024-12-14 01:05:20.273683] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71912 ] 00:05:47.055 [2024-12-14 01:05:20.422250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.055 [2024-12-14 01:05:20.454448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.055 [2024-12-14 01:05:20.454799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.055 [2024-12-14 01:05:20.454922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.055 [2024-12-14 01:05:20.455008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.629 01:05:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.629 01:05:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.629 01:05:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.629 01:05:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.629 01:05:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.630 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.630 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.630 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.630 POWER: Cannot set governor of lcore 0 to performance 00:05:47.630 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.630 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.630 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:47.630 POWER: Unable to set Power Management Environment for lcore 0 00:05:47.630 [2024-12-14 01:05:21.144913] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:47.630 [2024-12-14 01:05:21.144939] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:47.630 [2024-12-14 01:05:21.144976] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.630 [2024-12-14 01:05:21.144997] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.630 [2024-12-14 01:05:21.145006] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.630 [2024-12-14 01:05:21.145030] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.630 01:05:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.630 01:05:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.630 01:05:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.630 01:05:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.630 [2024-12-14 01:05:21.237409] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
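The process teardown traced throughout this log (trap registration, `kill -0` liveness probe, `kill`, then `wait`) follows one pattern. The sketch below is a simplified stand-alone re-implementation for illustration, not the `autotest_common.sh` helper itself:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern seen in this log:
# probe the pid with kill -0, then terminate and reap it.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to do
  echo "killing process with pid $pid"
  kill "$pid" 2>/dev/null || true
  wait "$pid" 2>/dev/null || true          # reap so the pid cannot linger
}

sleep 60 &
bgpid=$!
trap "killprocess $bgpid; exit 1" SIGINT SIGTERM EXIT   # cleanup on abort
killprocess "$bgpid"
trap - SIGINT SIGTERM EXIT                              # disarm, as the tests do
```

Arming the trap before the work and clearing it afterwards is what makes the tests leak-free: an interrupted run still kills its daemon, while a clean run disarms the trap so teardown happens exactly once.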
00:05:47.891 01:05:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.891 01:05:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.891 01:05:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 ************************************ 00:05:47.891 START TEST scheduler_create_thread 00:05:47.891 ************************************ 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 2 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 3 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 4 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 5 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 6 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.891 7 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 8 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 9 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 10 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.891 01:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.400 01:05:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.400 01:05:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.400 01:05:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.400 01:05:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.400 01:05:22 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.342 01:05:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.342 00:05:50.342 real 0m2.613s 00:05:50.342 user 0m0.014s 00:05:50.342 sys 0m0.009s 00:05:50.342 01:05:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.342 01:05:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.342 ************************************ 00:05:50.342 END TEST scheduler_create_thread 00:05:50.342 ************************************ 00:05:50.342 01:05:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.342 01:05:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71912 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71912 ']' 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71912 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71912 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.342 killing process with pid 71912 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71912' 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71912 00:05:50.342 01:05:23 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71912 00:05:50.909 [2024-12-14 01:05:24.349209] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:50.909 ************************************ 00:05:50.909 END TEST event_scheduler 00:05:50.909 ************************************ 00:05:50.909 00:05:50.909 real 0m4.432s 00:05:50.909 user 0m8.142s 00:05:50.909 sys 0m0.396s 00:05:50.909 01:05:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.910 01:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.168 01:05:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.168 01:05:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.168 01:05:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.168 01:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.168 01:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.168 ************************************ 00:05:51.168 START TEST app_repeat 00:05:51.168 ************************************ 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72007 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.168 Process app_repeat pid: 72007 00:05:51.168 
spdk_app_start Round 0 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72007' 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72007 /var/tmp/spdk-nbd.sock 00:05:51.168 01:05:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.168 01:05:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.168 [2024-12-14 01:05:24.583300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:51.168 [2024-12-14 01:05:24.583407] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72007 ] 00:05:51.168 [2024-12-14 01:05:24.729698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.168 [2024-12-14 01:05:24.749463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.168 [2024-12-14 01:05:24.749501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.101 01:05:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.101 01:05:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.101 01:05:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.101 Malloc0 00:05:52.101 01:05:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.360 Malloc1 00:05:52.360 01:05:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.360 01:05:25 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.360 01:05:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.618 /dev/nbd0 00:05:52.618 01:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.618 01:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.618 1+0 records in 00:05:52.618 1+0 
records out 00:05:52.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208877 s, 19.6 MB/s 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.618 01:05:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.618 01:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.618 01:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.618 01:05:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.618 /dev/nbd1 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.877 1+0 records in 00:05:52.877 1+0 records out 00:05:52.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263802 s, 15.5 MB/s 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.877 01:05:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.877 { 00:05:52.877 "nbd_device": "/dev/nbd0", 00:05:52.877 "bdev_name": "Malloc0" 00:05:52.877 }, 00:05:52.877 { 00:05:52.877 "nbd_device": "/dev/nbd1", 00:05:52.877 "bdev_name": "Malloc1" 00:05:52.877 } 00:05:52.877 ]' 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.877 { 00:05:52.877 "nbd_device": "/dev/nbd0", 00:05:52.877 "bdev_name": "Malloc0" 00:05:52.877 }, 00:05:52.877 { 00:05:52.877 "nbd_device": "/dev/nbd1", 00:05:52.877 "bdev_name": "Malloc1" 00:05:52.877 } 00:05:52.877 ]' 00:05:52.877 01:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.136 /dev/nbd1' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.136 /dev/nbd1' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.136 256+0 records in 00:05:53.136 256+0 records out 00:05:53.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111669 s, 93.9 MB/s 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.136 256+0 records in 00:05:53.136 256+0 records out 00:05:53.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196007 s, 53.5 MB/s 00:05:53.136 01:05:26 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.136 256+0 records in 00:05:53.136 256+0 records out 00:05:53.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152172 s, 68.9 MB/s 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.136 01:05:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.137 01:05:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.137 01:05:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.137 01:05:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.394 01:05:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.395 01:05:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.653 01:05:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.653 01:05:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.911 01:05:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.172 [2024-12-14 01:05:27.533689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.172 [2024-12-14 01:05:27.551712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.172 [2024-12-14 01:05:27.551930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.172 
[2024-12-14 01:05:27.584154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.172 [2024-12-14 01:05:27.584209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.553 01:05:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.553 spdk_app_start Round 1 00:05:57.553 01:05:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.553 01:05:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72007 /var/tmp/spdk-nbd.sock 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.553 01:05:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.553 01:05:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.553 Malloc0 00:05:57.553 01:05:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.553 Malloc1 00:05:57.553 01:05:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.553 01:05:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.554 01:05:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.554 01:05:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.554 01:05:31 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.554 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.554 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.554 01:05:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.815 /dev/nbd0 00:05:57.815 01:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.815 01:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.815 01:05:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.815 01:05:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.815 01:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.816 1+0 records in 00:05:57.816 1+0 records out 00:05:57.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290333 s, 14.1 MB/s 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.816 
01:05:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.816 01:05:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.816 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.816 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.816 01:05:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.078 /dev/nbd1 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.078 1+0 records in 00:05:58.078 1+0 records out 00:05:58.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237276 s, 17.3 MB/s 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:58.078 01:05:31 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.078 01:05:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.078 01:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.340 { 00:05:58.340 "nbd_device": "/dev/nbd0", 00:05:58.340 "bdev_name": "Malloc0" 00:05:58.340 }, 00:05:58.340 { 00:05:58.340 "nbd_device": "/dev/nbd1", 00:05:58.340 "bdev_name": "Malloc1" 00:05:58.340 } 00:05:58.340 ]' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.340 { 00:05:58.340 "nbd_device": "/dev/nbd0", 00:05:58.340 "bdev_name": "Malloc0" 00:05:58.340 }, 00:05:58.340 { 00:05:58.340 "nbd_device": "/dev/nbd1", 00:05:58.340 "bdev_name": "Malloc1" 00:05:58.340 } 00:05:58.340 ]' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.340 /dev/nbd1' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.340 /dev/nbd1' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.340 
01:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.340 256+0 records in 00:05:58.340 256+0 records out 00:05:58.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743799 s, 141 MB/s 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.340 256+0 records in 00:05:58.340 256+0 records out 00:05:58.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135169 s, 77.6 MB/s 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.340 256+0 records in 00:05:58.340 256+0 records out 00:05:58.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177522 s, 59.1 MB/s 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.340 01:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.602 01:05:32 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.602 01:05:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.863 01:05:32 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.863 01:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.125 01:05:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.125 01:05:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.125 01:05:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.386 [2024-12-14 01:05:32.776206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.387 [2024-12-14 01:05:32.792374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.387 [2024-12-14 01:05:32.792376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.387 [2024-12-14 01:05:32.822131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.387 [2024-12-14 01:05:32.822174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:02.677 spdk_app_start Round 2 00:06:02.677 01:05:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.677 01:05:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.677 01:05:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72007 /var/tmp/spdk-nbd.sock 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.677 01:05:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.677 01:05:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.677 Malloc0 00:06:02.677 01:05:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.936 Malloc1 00:06:02.936 01:05:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.936 
01:05:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.936 01:05:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.937 01:05:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.937 01:05:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.937 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.937 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.937 01:05:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.196 /dev/nbd0 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:03.196 01:05:36 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.196 1+0 records in 00:06:03.196 1+0 records out 00:06:03.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170525 s, 24.0 MB/s 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.196 /dev/nbd1 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.196 01:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.196 01:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.196 01:05:36 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.456 1+0 records in 00:06:03.456 1+0 records out 00:06:03.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244935 s, 16.7 MB/s 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.456 01:05:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.456 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.456 01:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.456 01:05:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.456 01:05:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.456 01:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.456 { 00:06:03.456 "nbd_device": "/dev/nbd0", 00:06:03.456 "bdev_name": "Malloc0" 00:06:03.456 }, 00:06:03.456 { 00:06:03.456 "nbd_device": "/dev/nbd1", 00:06:03.456 "bdev_name": 
"Malloc1" 00:06:03.456 } 00:06:03.456 ]' 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.456 { 00:06:03.456 "nbd_device": "/dev/nbd0", 00:06:03.456 "bdev_name": "Malloc0" 00:06:03.456 }, 00:06:03.456 { 00:06:03.456 "nbd_device": "/dev/nbd1", 00:06:03.456 "bdev_name": "Malloc1" 00:06:03.456 } 00:06:03.456 ]' 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.456 /dev/nbd1' 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.456 /dev/nbd1' 00:06:03.456 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.715 256+0 records in 00:06:03.715 256+0 records out 00:06:03.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646499 s, 162 MB/s 
00:06:03.715 01:05:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.716 256+0 records in 00:06:03.716 256+0 records out 00:06:03.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168209 s, 62.3 MB/s 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.716 256+0 records in 00:06:03.716 256+0 records out 00:06:03.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021193 s, 49.5 MB/s 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.716 01:05:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.975 01:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.235 01:05:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.235 01:05:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.493 01:05:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.493 [2024-12-14 01:05:38.101563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.751 [2024-12-14 01:05:38.119703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.751 [2024-12-14 01:05:38.119715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.751 [2024-12-14 01:05:38.152754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.751 [2024-12-14 01:05:38.152807] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.061 01:05:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72007 /var/tmp/spdk-nbd.sock 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:08.061 01:05:41 event.app_repeat -- event/event.sh@39 -- # killprocess 72007 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72007 ']' 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72007 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72007 00:06:08.061 killing process with pid 72007 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72007' 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72007 00:06:08.061 01:05:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72007 00:06:08.061 spdk_app_start is called in Round 0. 00:06:08.061 Shutdown signal received, stop current app iteration 00:06:08.061 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:08.061 spdk_app_start is called in Round 1. 00:06:08.061 Shutdown signal received, stop current app iteration 00:06:08.061 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:08.061 spdk_app_start is called in Round 2. 
00:06:08.061 Shutdown signal received, stop current app iteration 00:06:08.061 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:08.061 spdk_app_start is called in Round 3. 00:06:08.061 Shutdown signal received, stop current app iteration 00:06:08.061 01:05:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.061 01:05:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.061 00:06:08.061 real 0m16.832s 00:06:08.061 user 0m37.624s 00:06:08.061 sys 0m2.089s 00:06:08.061 ************************************ 00:06:08.061 END TEST app_repeat 00:06:08.062 ************************************ 00:06:08.062 01:05:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.062 01:05:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.062 01:05:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.062 01:05:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.062 01:05:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.062 01:05:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.062 01:05:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.062 ************************************ 00:06:08.062 START TEST cpu_locks 00:06:08.062 ************************************ 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.062 * Looking for test storage... 
00:06:08.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.062 01:05:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.062 --rc genhtml_branch_coverage=1 00:06:08.062 --rc genhtml_function_coverage=1 00:06:08.062 --rc genhtml_legend=1 00:06:08.062 --rc geninfo_all_blocks=1 00:06:08.062 --rc geninfo_unexecuted_blocks=1 00:06:08.062 00:06:08.062 ' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.062 --rc genhtml_branch_coverage=1 00:06:08.062 --rc genhtml_function_coverage=1 00:06:08.062 --rc genhtml_legend=1 00:06:08.062 --rc geninfo_all_blocks=1 00:06:08.062 --rc geninfo_unexecuted_blocks=1 
00:06:08.062 00:06:08.062 ' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.062 --rc genhtml_branch_coverage=1 00:06:08.062 --rc genhtml_function_coverage=1 00:06:08.062 --rc genhtml_legend=1 00:06:08.062 --rc geninfo_all_blocks=1 00:06:08.062 --rc geninfo_unexecuted_blocks=1 00:06:08.062 00:06:08.062 ' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.062 --rc genhtml_branch_coverage=1 00:06:08.062 --rc genhtml_function_coverage=1 00:06:08.062 --rc genhtml_legend=1 00:06:08.062 --rc geninfo_all_blocks=1 00:06:08.062 --rc geninfo_unexecuted_blocks=1 00:06:08.062 00:06:08.062 ' 00:06:08.062 01:05:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.062 01:05:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.062 01:05:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.062 01:05:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.062 01:05:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.062 ************************************ 00:06:08.062 START TEST default_locks 00:06:08.062 ************************************ 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72432 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72432 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 72432 ']' 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.062 01:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.062 [2024-12-14 01:05:41.636281] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:08.062 [2024-12-14 01:05:41.636396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72432 ] 00:06:08.321 [2024-12-14 01:05:41.780918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.321 [2024-12-14 01:05:41.804729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.887 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.887 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:08.887 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72432 00:06:08.887 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.887 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72432 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72432 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72432 ']' 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72432 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72432 00:06:09.146 killing process with pid 72432 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 72432' 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72432 00:06:09.146 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72432 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72432 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72432 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72432 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72432 ']' 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.405 ERROR: process (pid: 72432) is no longer running 00:06:09.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72432) - No such process 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.405 00:06:09.405 real 0m1.410s 00:06:09.405 user 0m1.446s 00:06:09.405 sys 0m0.420s 00:06:09.405 ************************************ 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.405 01:05:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.405 END TEST default_locks 00:06:09.405 ************************************ 00:06:09.405 01:05:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.405 01:05:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:09.405 01:05:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.405 01:05:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.664 ************************************ 00:06:09.664 START TEST default_locks_via_rpc 00:06:09.664 ************************************ 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:09.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72474 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72474 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72474 ']' 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.664 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.664 [2024-12-14 01:05:43.086350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:09.664 [2024-12-14 01:05:43.086465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:06:09.664 [2024-12-14 01:05:43.227161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.664 [2024-12-14 01:05:43.246102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.599 01:05:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72474 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72474 00:06:10.599 01:05:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72474 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72474 ']' 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72474 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72474 00:06:10.599 killing process with pid 72474 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72474' 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72474 00:06:10.599 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72474 00:06:10.858 00:06:10.858 real 0m1.364s 00:06:10.858 user 0m1.423s 00:06:10.858 sys 0m0.381s 00:06:10.858 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.858 
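The `locks_exist 72474` check above pipes `lslocks -p <pid>` through `grep -q spdk_cpu_lock` to confirm the target still holds its CPU-core lock file after `framework_enable_cpumask_locks`. A self-contained sketch of that check, with the `lslocks` output simulated by a canned string so it runs anywhere (the lock-file path shown is a hypothetical example, not a path taken from this log):

```shell
# Sketch of the locks_exist check seen in the log. In the harness it is:
#   lslocks -p "$pid" | grep -q spdk_cpu_lock
# Here $2 simulates the `lslocks -p $1` output so the sketch is
# self-contained; the path below is a hypothetical example.
locks_exist() {
    echo "$2" | grep -q spdk_cpu_lock
}

sample='spdk_tgt 72474 FLOCK WRITE /var/tmp/spdk_cpu_lock_001'
locks_exist 72474 "$sample" && echo "core lock held by 72474"
```

The `default_locks_via_rpc` test passes when this grep matches after re-enabling the locks over RPC.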
************************************ 00:06:10.858 END TEST default_locks_via_rpc 00:06:10.858 ************************************ 00:06:10.858 01:05:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.858 01:05:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.858 01:05:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.858 01:05:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.858 01:05:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.858 ************************************ 00:06:10.858 START TEST non_locking_app_on_locked_coremask 00:06:10.858 ************************************ 00:06:10.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72526 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72526 /var/tmp/spdk.sock 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72526 ']' 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.858 01:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.115 [2024-12-14 01:05:44.508646] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:11.116 [2024-12-14 01:05:44.508760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72526 ] 00:06:11.116 [2024-12-14 01:05:44.654081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.116 [2024-12-14 01:05:44.673484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72536 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72536 /var/tmp/spdk2.sock 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72536 ']' 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.056 01:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.056 [2024-12-14 01:05:45.400542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:12.056 [2024-12-14 01:05:45.400812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72536 ] 00:06:12.056 [2024-12-14 01:05:45.554686] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.056 [2024-12-14 01:05:45.554739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.056 [2024-12-14 01:05:45.594824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.628 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.628 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.628 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72526 00:06:12.628 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72526 00:06:12.628 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72526 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72526 ']' 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72526 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
72526 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72526' 00:06:13.201 killing process with pid 72526 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72526 00:06:13.201 01:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72526 00:06:13.460 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72536 00:06:13.460 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72536 ']' 00:06:13.460 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72536 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72536 00:06:13.720 killing process with pid 72536 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72536' 00:06:13.720 01:05:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72536 00:06:13.720 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72536 00:06:13.981 ************************************ 00:06:13.981 END TEST non_locking_app_on_locked_coremask 00:06:13.981 ************************************ 00:06:13.981 00:06:13.981 real 0m2.906s 00:06:13.981 user 0m3.174s 00:06:13.981 sys 0m0.740s 00:06:13.981 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.981 01:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.981 01:05:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:13.981 01:05:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.982 01:05:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.982 01:05:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.982 ************************************ 00:06:13.982 START TEST locking_app_on_unlocked_coremask 00:06:13.982 ************************************ 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:13.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
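The `killprocess` sequences above follow one shape each time: verify the pid with `kill -0`, check the process name, send the signal, then `wait` for it to exit. A simplified, self-contained sketch of that pattern (the name `killprocess` comes from the log; the body is an assumed reduction, not the harness's exact code):

```shell
# Simplified sketch of the killprocess pattern visible in the log:
# confirm the pid exists, send SIGTERM, then reap it so that later
# `kill -0` liveness checks fail as expected.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process must exist
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # ignore the signal exit status
}

sleep 30 &
killprocess $! && echo "process terminated"
```

Reaping with `wait` matters: it is why the harness's follow-up `kill: (<pid>) - No such process` probes in the log fail deterministically rather than racing a zombie.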
00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72594 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72594 /var/tmp/spdk.sock 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72594 ']' 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.982 01:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.982 [2024-12-14 01:05:47.466338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:13.982 [2024-12-14 01:05:47.466617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72594 ] 00:06:14.239 [2024-12-14 01:05:47.612831] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.239 [2024-12-14 01:05:47.612883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.239 [2024-12-14 01:05:47.633498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.804 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72605 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72605 /var/tmp/spdk2.sock 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72605 ']' 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.805 01:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.805 [2024-12-14 01:05:48.388439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
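The records above show that with `--disable-cpumask-locks` a second `spdk_tgt` starts cleanly on the same core mask (`CPU core locks deactivated.`), whereas the later `locking_app_on_locked_coremask` test shows the claim failing when locks are active. The core locks behave like exclusive advisory file locks; a hypothetical illustration of that behavior using util-linux `flock(1)` (not SPDK's actual locking code, and the lock file here is just a temp file standing in for a per-core lock):

```shell
# Hypothetical illustration of per-core advisory locking (not SPDK code):
# the first claimant takes an exclusive lock on a per-core lock file;
# a second, independent open of the same file cannot take it.
lockfile=$(mktemp)            # stands in for a per-core lock file
exec 9>"$lockfile"
flock -n 9 && echo "core claimed"
flock -n "$lockfile" -c true || echo "core already claimed"
rm -f "$lockfile"
```

Disabling the locks corresponds to simply skipping the `flock` step, which is why both targets in this test coexist on core 0.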
00:06:14.805 [2024-12-14 01:05:48.388697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72605 ] 00:06:15.063 [2024-12-14 01:05:48.547682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.063 [2024-12-14 01:05:48.587584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.630 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.630 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.630 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72605 00:06:15.630 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72605 00:06:15.630 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72594 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72594 ']' 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72594 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72594 00:06:16.196 killing process with pid 72594 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72594' 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72594 00:06:16.196 01:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72594 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72605 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72605 ']' 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72605 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72605 00:06:16.455 killing process with pid 72605 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72605' 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72605 00:06:16.455 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 72605 00:06:16.713 ************************************ 00:06:16.713 END TEST locking_app_on_unlocked_coremask 00:06:16.713 ************************************ 00:06:16.713 00:06:16.713 real 0m2.901s 00:06:16.713 user 0m3.208s 00:06:16.713 sys 0m0.764s 00:06:16.714 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.714 01:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 01:05:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.972 01:05:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.972 01:05:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.972 01:05:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 ************************************ 00:06:16.972 START TEST locking_app_on_locked_coremask 00:06:16.972 ************************************ 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72663 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72663 /var/tmp/spdk.sock 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72663 ']' 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.972 01:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 [2024-12-14 01:05:50.434312] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:16.972 [2024-12-14 01:05:50.434433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ] 00:06:16.972 [2024-12-14 01:05:50.579202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.230 [2024-12-14 01:05:50.599428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.796 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72679 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72679 /var/tmp/spdk2.sock 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:17.797 
01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72679 /var/tmp/spdk2.sock 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72679 /var/tmp/spdk2.sock 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72679 ']' 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.797 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.797 [2024-12-14 01:05:51.361902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:17.797 [2024-12-14 01:05:51.362237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:06:18.054 [2024-12-14 01:05:51.520567] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72663 has claimed it. 00:06:18.055 [2024-12-14 01:05:51.524649] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.620 ERROR: process (pid: 72679) is no longer running 00:06:18.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72679) - No such process 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72663 00:06:18.620 01:05:51 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72663 00:06:18.620 01:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72663 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72663 ']' 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72663 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72663 00:06:18.620 killing process with pid 72663 00:06:18.620 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.621 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.621 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72663' 00:06:18.621 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72663 00:06:18.621 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72663 00:06:18.879 ************************************ 00:06:18.879 END TEST locking_app_on_locked_coremask 00:06:18.879 ************************************ 00:06:18.879 00:06:18.879 real 0m2.098s 00:06:18.879 user 0m2.366s 00:06:18.879 sys 0m0.493s 00:06:18.879 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:18.879 01:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.139 01:05:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.139 01:05:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.139 01:05:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.139 01:05:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.139 ************************************ 00:06:19.139 START TEST locking_overlapped_coremask 00:06:19.139 ************************************ 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72725 00:06:19.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72725 /var/tmp/spdk.sock 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72725 ']' 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.139 01:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.139 [2024-12-14 01:05:52.593553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:19.139 [2024-12-14 01:05:52.593688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72725 ] 00:06:19.139 [2024-12-14 01:05:52.739209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.397 [2024-12-14 01:05:52.762222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.397 [2024-12-14 01:05:52.762596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.397 [2024-12-14 01:05:52.762618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72739 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72739 /var/tmp/spdk2.sock 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72739 /var/tmp/spdk2.sock 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.965 01:05:53 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72739 /var/tmp/spdk2.sock 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72739 ']' 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.965 01:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.965 [2024-12-14 01:05:53.515372] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:19.965 [2024-12-14 01:05:53.515547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72739 ] 00:06:20.226 [2024-12-14 01:05:53.676550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72725 has claimed it. 00:06:20.226 [2024-12-14 01:05:53.676613] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.797 ERROR: process (pid: 72739) is no longer running 00:06:20.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72739) - No such process 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.797 01:05:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72725 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72725 ']' 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72725 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72725 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72725' 00:06:20.798 killing process with pid 72725 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72725 00:06:20.798 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72725 00:06:21.057 00:06:21.057 real 0m1.965s 00:06:21.057 user 0m5.452s 00:06:21.057 sys 0m0.411s 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.057 ************************************ 
00:06:21.057 END TEST locking_overlapped_coremask 00:06:21.057 ************************************ 00:06:21.057 01:05:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.057 01:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.057 01:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.057 01:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.057 ************************************ 00:06:21.057 START TEST locking_overlapped_coremask_via_rpc 00:06:21.057 ************************************ 00:06:21.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72781 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72781 /var/tmp/spdk.sock 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72781 ']' 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.057 01:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.057 [2024-12-14 01:05:54.618675] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:21.057 [2024-12-14 01:05:54.618786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:06:21.317 [2024-12-14 01:05:54.765802] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.317 [2024-12-14 01:05:54.765847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.317 [2024-12-14 01:05:54.787942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.317 [2024-12-14 01:05:54.788190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.317 [2024-12-14 01:05:54.788210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
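The `check_remaining_locks` helper that both overlapped-coremask tests run at the end is visible only through its xtrace output above (`locks=(/var/tmp/spdk_cpu_lock_*)`, `locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})`). A hedged reconstruction from those globs — the exact body in `event/cpu_locks.sh` may differ:

```shell
# Sketch of check_remaining_locks as reconstructed from the trace above:
# glob the per-core lock files SPDK leaves in /var/tmp and compare them
# against the set a claimed mask of 0x7 (cores 0-2) should produce.
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}

check_remaining_locks && echo "locks match claimed mask" || echo "lock files differ"
```

The brace expansion `{000..002}` is what makes the expected list zero-padded to three digits, matching the `spdk_cpu_lock_000` through `spdk_cpu_lock_002` names in the trace.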
00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72799 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72799 /var/tmp/spdk2.sock 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72799 ']' 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.935 01:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.935 [2024-12-14 01:05:55.524018] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:21.935 [2024-12-14 01:05:55.524131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72799 ] 00:06:22.206 [2024-12-14 01:05:55.682446] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.206 [2024-12-14 01:05:55.682492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.206 [2024-12-14 01:05:55.723370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.206 [2024-12-14 01:05:55.730779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.206 [2024-12-14 01:05:55.730836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.773 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.774 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.774 [2024-12-14 01:05:56.377761] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72781 has claimed it. 
00:06:23.032 request: 00:06:23.032 { 00:06:23.032 "method": "framework_enable_cpumask_locks", 00:06:23.032 "req_id": 1 00:06:23.032 } 00:06:23.032 Got JSON-RPC error response 00:06:23.032 response: 00:06:23.032 { 00:06:23.032 "code": -32603, 00:06:23.032 "message": "Failed to claim CPU core: 2" 00:06:23.032 } 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72781 /var/tmp/spdk.sock 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72781 ']' 00:06:23.032 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
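The error object returned above is a standard JSON-RPC 2.0 error: `-32603` is the spec's "Internal error" code, which SPDK reuses when the core claim fails. A minimal shell check of that response shape (the `sed` extraction is illustrative, not part of the test harness):

```shell
# Parse the "code" field out of the JSON-RPC error response shown in the log.
resp='{"code": -32603, "message": "Failed to claim CPU core: 2"}'
code=$(printf '%s' "$resp" | sed -n 's/.*"code": \(-[0-9]*\).*/\1/p')
[ "$code" = "-32603" ] && echo "claim failed as expected"   # prints on match
```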
00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72799 /var/tmp/spdk2.sock 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72799 ']' 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
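The contested core in both overlapped-coremask variants is predictable from the two masks alone: the first target runs with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4), so their intersection is exactly core 2 — hence every "Cannot create lock on core 2" error in this trace. A quick bash verification:

```shell
# Cpumasks used by the overlapped-coremask tests above.
first=$(( 0x7 ))    # cores 0-2
second=$(( 0x1c ))  # cores 2-4
overlap=$(( first & second ))            # bitwise AND = contested cores
printf 'overlap mask: %#x\n' "$overlap"  # prints "overlap mask: 0x4" (core 2)
```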
00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.033 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.292 ************************************ 00:06:23.292 END TEST locking_overlapped_coremask_via_rpc 00:06:23.292 ************************************ 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.292 00:06:23.292 real 0m2.265s 00:06:23.292 user 0m1.058s 00:06:23.292 sys 0m0.129s 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.292 01:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.292 01:05:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.292 01:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72781 ]] 00:06:23.292 01:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72781 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72781 ']' 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72781 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72781 00:06:23.292 killing process with pid 72781 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72781' 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72781 00:06:23.292 01:05:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72781 00:06:23.552 01:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72799 ]] 00:06:23.552 01:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72799 00:06:23.552 01:05:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72799 ']' 00:06:23.552 01:05:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72799 00:06:23.552 01:05:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.552 01:05:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.552 01:05:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72799 00:06:23.811 killing process with pid 72799 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 72799' 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72799 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72799 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.811 Process with pid 72781 is not found 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72781 ]] 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72781 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72781 ']' 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72781 00:06:23.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72781) - No such process 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72781 is not found' 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72799 ]] 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72799 00:06:23.811 Process with pid 72799 is not found 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72799 ']' 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72799 00:06:23.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72799) - No such process 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72799 is not found' 00:06:23.811 01:05:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.811 ************************************ 00:06:23.811 END TEST cpu_locks 00:06:23.811 ************************************ 00:06:23.811 00:06:23.811 real 0m15.992s 00:06:23.811 user 0m28.524s 00:06:23.811 sys 0m4.035s 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:23.811 01:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.070 ************************************ 00:06:24.070 END TEST event 00:06:24.070 ************************************ 00:06:24.070 00:06:24.070 real 0m41.521s 00:06:24.070 user 1m20.675s 00:06:24.070 sys 0m6.973s 00:06:24.070 01:05:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.070 01:05:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.070 01:05:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.070 01:05:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.070 01:05:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.070 01:05:57 -- common/autotest_common.sh@10 -- # set +x 00:06:24.070 ************************************ 00:06:24.070 START TEST thread 00:06:24.070 ************************************ 00:06:24.070 01:05:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.070 * Looking for test storage... 
00:06:24.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.070 01:05:57 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.070 01:05:57 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.070 01:05:57 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.070 01:05:57 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.070 01:05:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.070 01:05:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.070 01:05:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.070 01:05:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.070 01:05:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.070 01:05:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.070 01:05:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.070 01:05:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.070 01:05:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.070 01:05:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.070 01:05:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.070 01:05:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.070 01:05:57 thread -- scripts/common.sh@345 -- # : 1 00:06:24.070 01:05:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.070 01:05:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.070 01:05:57 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.071 01:05:57 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.071 01:05:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.071 01:05:57 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.071 01:05:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.071 01:05:57 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.071 01:05:57 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.071 01:05:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.071 01:05:57 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.071 01:05:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.071 01:05:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.071 01:05:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.071 01:05:57 thread -- scripts/common.sh@368 -- # return 0 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.071 --rc genhtml_branch_coverage=1 00:06:24.071 --rc genhtml_function_coverage=1 00:06:24.071 --rc genhtml_legend=1 00:06:24.071 --rc geninfo_all_blocks=1 00:06:24.071 --rc geninfo_unexecuted_blocks=1 00:06:24.071 00:06:24.071 ' 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.071 --rc genhtml_branch_coverage=1 00:06:24.071 --rc genhtml_function_coverage=1 00:06:24.071 --rc genhtml_legend=1 00:06:24.071 --rc geninfo_all_blocks=1 00:06:24.071 --rc geninfo_unexecuted_blocks=1 00:06:24.071 00:06:24.071 ' 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.071 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.071 --rc genhtml_branch_coverage=1 00:06:24.071 --rc genhtml_function_coverage=1 00:06:24.071 --rc genhtml_legend=1 00:06:24.071 --rc geninfo_all_blocks=1 00:06:24.071 --rc geninfo_unexecuted_blocks=1 00:06:24.071 00:06:24.071 ' 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.071 --rc genhtml_branch_coverage=1 00:06:24.071 --rc genhtml_function_coverage=1 00:06:24.071 --rc genhtml_legend=1 00:06:24.071 --rc geninfo_all_blocks=1 00:06:24.071 --rc geninfo_unexecuted_blocks=1 00:06:24.071 00:06:24.071 ' 00:06:24.071 01:05:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.071 01:05:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.071 ************************************ 00:06:24.071 START TEST thread_poller_perf 00:06:24.071 ************************************ 00:06:24.071 01:05:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.071 [2024-12-14 01:05:57.667370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:24.071 [2024-12-14 01:05:57.667600] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72926 ] 00:06:24.330 [2024-12-14 01:05:57.811745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.330 [2024-12-14 01:05:57.832309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.330 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:25.707 [2024-12-14T01:05:59.319Z] ====================================== 00:06:25.707 [2024-12-14T01:05:59.319Z] busy:2611085764 (cyc) 00:06:25.707 [2024-12-14T01:05:59.319Z] total_run_count: 303000 00:06:25.707 [2024-12-14T01:05:59.319Z] tsc_hz: 2600000000 (cyc) 00:06:25.707 [2024-12-14T01:05:59.319Z] ====================================== 00:06:25.707 [2024-12-14T01:05:59.319Z] poller_cost: 8617 (cyc), 3314 (nsec) 00:06:25.707 00:06:25.707 real 0m1.243s 00:06:25.707 user 0m1.082s 00:06:25.707 sys 0m0.053s 00:06:25.707 01:05:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.707 01:05:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.707 ************************************ 00:06:25.707 END TEST thread_poller_perf 00:06:25.707 ************************************ 00:06:25.707 01:05:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.707 01:05:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.707 01:05:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.707 01:05:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.707 ************************************ 00:06:25.707 START TEST thread_poller_perf 00:06:25.707 
************************************ 00:06:25.707 01:05:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.707 [2024-12-14 01:05:58.960733] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:25.707 [2024-12-14 01:05:58.960846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72968 ] 00:06:25.707 [2024-12-14 01:05:59.105657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.707 [2024-12-14 01:05:59.125149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.707 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:26.642 [2024-12-14T01:06:00.254Z] ====================================== 00:06:26.642 [2024-12-14T01:06:00.254Z] busy:2603277388 (cyc) 00:06:26.642 [2024-12-14T01:06:00.254Z] total_run_count: 3612000 00:06:26.642 [2024-12-14T01:06:00.254Z] tsc_hz: 2600000000 (cyc) 00:06:26.642 [2024-12-14T01:06:00.254Z] ====================================== 00:06:26.642 [2024-12-14T01:06:00.254Z] poller_cost: 720 (cyc), 276 (nsec) 00:06:26.642 ************************************ 00:06:26.642 END TEST thread_poller_perf 00:06:26.642 ************************************ 00:06:26.642 00:06:26.642 real 0m1.236s 00:06:26.642 user 0m1.083s 00:06:26.642 sys 0m0.047s 00:06:26.642 01:06:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.642 01:06:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.642 01:06:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.642 ************************************ 00:06:26.642 END TEST thread 00:06:26.642 ************************************ 00:06:26.642 
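The two poller_perf summaries above report poller_cost as busy cycles divided by total_run_count, converted to nanoseconds via the TSC frequency. A minimal sketch of that arithmetic (the function name and integer truncation are assumptions; only the logged figures are from the run):

```python
# Hedged reconstruction of the poller_cost lines in the two runs above.
def poller_cost(busy_cyc: int, run_count: int, tsc_hz: int) -> tuple[int, int]:
    cyc = busy_cyc // run_count            # cycles per poller invocation
    nsec = cyc * 1_000_000_000 // tsc_hz   # cycles -> ns via TSC frequency
    return cyc, nsec

# -l 1 run: busy 2611085764 cyc over 303000 invocations at 2.6 GHz
assert poller_cost(2611085764, 303000, 2600000000) == (8617, 3314)
# -l 0 run: busy 2603277388 cyc over 3612000 invocations
assert poller_cost(2603277388, 3612000, 2600000000) == (720, 276)
```

The roughly 12x cost difference between the two runs reflects the 1-microsecond timed-poller period versus the 0-period busy poller.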
00:06:26.642 real 0m2.712s 00:06:26.642 user 0m2.279s 00:06:26.642 sys 0m0.210s 00:06:26.642 01:06:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.642 01:06:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.642 01:06:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.642 01:06:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.642 01:06:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.642 01:06:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.642 01:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.642 ************************************ 00:06:26.642 START TEST app_cmdline 00:06:26.642 ************************************ 00:06:26.642 01:06:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.901 * Looking for test storage... 00:06:26.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.901 01:06:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.901 --rc genhtml_branch_coverage=1 00:06:26.901 --rc genhtml_function_coverage=1 00:06:26.901 --rc 
genhtml_legend=1 00:06:26.901 --rc geninfo_all_blocks=1 00:06:26.901 --rc geninfo_unexecuted_blocks=1 00:06:26.901 00:06:26.901 ' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.901 --rc genhtml_branch_coverage=1 00:06:26.901 --rc genhtml_function_coverage=1 00:06:26.901 --rc genhtml_legend=1 00:06:26.901 --rc geninfo_all_blocks=1 00:06:26.901 --rc geninfo_unexecuted_blocks=1 00:06:26.901 00:06:26.901 ' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.901 --rc genhtml_branch_coverage=1 00:06:26.901 --rc genhtml_function_coverage=1 00:06:26.901 --rc genhtml_legend=1 00:06:26.901 --rc geninfo_all_blocks=1 00:06:26.901 --rc geninfo_unexecuted_blocks=1 00:06:26.901 00:06:26.901 ' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.901 --rc genhtml_branch_coverage=1 00:06:26.901 --rc genhtml_function_coverage=1 00:06:26.901 --rc genhtml_legend=1 00:06:26.901 --rc geninfo_all_blocks=1 00:06:26.901 --rc geninfo_unexecuted_blocks=1 00:06:26.901 00:06:26.901 ' 00:06:26.901 01:06:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.901 01:06:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73046 00:06:26.901 01:06:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73046 00:06:26.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
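The `lt 1.15 2` traces repeated above come from scripts/common.sh cmp_versions, which splits both version strings on `.`, `-`, and `:` and compares components left to right. A hedged Python sketch of that comparison (component padding with zeros is an assumption about how unequal lengths are handled):

```python
import re

# Sketch of cmp_versions' "<" path as exercised by "lt 1.15 2" above.
def lt(v1: str, v2: str) -> bool:
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    # compare component-wise, padding the shorter list with zeros
    for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
        if x != y:
            return x < y
    return False  # equal versions are not less-than

assert lt("1.15", "2") is True    # matches the logged "return 0" (true)
assert lt("2", "1.15") is False
```

In the log this check gates whether the installed lcov is old enough to need the `--rc lcov_branch_coverage=1` option spelling exported just afterwards.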
00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73046 ']' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.901 01:06:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.901 01:06:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 [2024-12-14 01:06:00.454346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:26.901 [2024-12-14 01:06:00.454451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73046 ] 00:06:27.160 [2024-12-14 01:06:00.599529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.160 [2024-12-14 01:06:00.619780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.727 01:06:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.727 01:06:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:27.727 01:06:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.986 { 00:06:27.986 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:27.986 "fields": { 00:06:27.986 "major": 25, 00:06:27.986 "minor": 1, 00:06:27.986 "patch": 0, 00:06:27.986 "suffix": "-pre", 00:06:27.986 "commit": "e01cb43b8" 00:06:27.986 } 00:06:27.986 } 00:06:27.986 01:06:01 
app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.986 01:06:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:27.986 01:06:01 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.244 request: 00:06:28.244 { 00:06:28.244 "method": "env_dpdk_get_mem_stats", 00:06:28.244 "req_id": 1 00:06:28.244 } 00:06:28.244 Got JSON-RPC error response 00:06:28.244 response: 00:06:28.244 { 00:06:28.244 "code": -32601, 00:06:28.244 "message": "Method not found" 00:06:28.244 } 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.244 01:06:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73046 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73046 ']' 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73046 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73046 00:06:28.244 killing process with pid 73046 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 73046' 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 73046 00:06:28.244 01:06:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 73046 00:06:28.503 00:06:28.503 real 0m1.750s 00:06:28.503 user 0m2.090s 00:06:28.503 sys 0m0.384s 00:06:28.503 ************************************ 00:06:28.504 END TEST app_cmdline 00:06:28.504 ************************************ 00:06:28.504 01:06:01 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.504 01:06:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.504 01:06:02 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.504 01:06:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.504 01:06:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.504 01:06:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.504 ************************************ 00:06:28.504 START TEST version 00:06:28.504 ************************************ 00:06:28.504 01:06:02 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.504 * Looking for test storage... 
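The app_cmdline test above deliberately calls env_dpdk_get_mem_stats against a spdk_tgt started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the server answers with JSON-RPC error -32601. A small sketch of that allow-list behavior (the dispatcher below is illustrative, not SPDK's actual RPC server):

```python
# Hedged sketch of the allow-listed RPC exchange logged above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: dict) -> dict:
    # Methods outside the allow-list are rejected exactly as in the log:
    # {"code": -32601, "message": "Method not found"}
    if request["method"] not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    return {"result": {}}

assert dispatch({"method": "env_dpdk_get_mem_stats", "req_id": 1}) == {
    "code": -32601,
    "message": "Method not found",
}
assert dispatch({"method": "spdk_get_version", "req_id": 2}) == {"result": {}}
```

-32601 is the standard JSON-RPC "Method not found" code, which is why the test treats it as the expected success condition.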
00:06:28.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.504 01:06:02 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.504 01:06:02 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.504 01:06:02 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.763 01:06:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.763 01:06:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.763 01:06:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.763 01:06:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.763 01:06:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.763 01:06:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.763 01:06:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.763 01:06:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.763 01:06:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.763 01:06:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.763 01:06:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.763 01:06:02 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.763 01:06:02 version -- scripts/common.sh@345 -- # : 1 00:06:28.763 01:06:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.763 01:06:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.763 01:06:02 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.763 01:06:02 version -- scripts/common.sh@353 -- # local d=1 00:06:28.763 01:06:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.763 01:06:02 version -- scripts/common.sh@355 -- # echo 1 00:06:28.763 01:06:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.763 01:06:02 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.763 01:06:02 version -- scripts/common.sh@353 -- # local d=2 00:06:28.763 01:06:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.763 01:06:02 version -- scripts/common.sh@355 -- # echo 2 00:06:28.763 01:06:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.763 01:06:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.763 01:06:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.763 01:06:02 version -- scripts/common.sh@368 -- # return 0 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.763 --rc genhtml_branch_coverage=1 00:06:28.763 --rc genhtml_function_coverage=1 00:06:28.763 --rc genhtml_legend=1 00:06:28.763 --rc geninfo_all_blocks=1 00:06:28.763 --rc geninfo_unexecuted_blocks=1 00:06:28.763 00:06:28.763 ' 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.763 --rc genhtml_branch_coverage=1 00:06:28.763 --rc genhtml_function_coverage=1 00:06:28.763 --rc genhtml_legend=1 00:06:28.763 --rc geninfo_all_blocks=1 00:06:28.763 --rc geninfo_unexecuted_blocks=1 00:06:28.763 00:06:28.763 ' 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.763 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.763 --rc genhtml_branch_coverage=1 00:06:28.763 --rc genhtml_function_coverage=1 00:06:28.763 --rc genhtml_legend=1 00:06:28.763 --rc geninfo_all_blocks=1 00:06:28.763 --rc geninfo_unexecuted_blocks=1 00:06:28.763 00:06:28.763 ' 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.763 --rc genhtml_branch_coverage=1 00:06:28.763 --rc genhtml_function_coverage=1 00:06:28.763 --rc genhtml_legend=1 00:06:28.763 --rc geninfo_all_blocks=1 00:06:28.763 --rc geninfo_unexecuted_blocks=1 00:06:28.763 00:06:28.763 ' 00:06:28.763 01:06:02 version -- app/version.sh@17 -- # get_header_version major 00:06:28.763 01:06:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # cut -f2 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.763 01:06:02 version -- app/version.sh@17 -- # major=25 00:06:28.763 01:06:02 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.763 01:06:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # cut -f2 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.763 01:06:02 version -- app/version.sh@18 -- # minor=1 00:06:28.763 01:06:02 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # cut -f2 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.763 01:06:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.763 01:06:02 version -- app/version.sh@19 -- # patch=0 00:06:28.763 
01:06:02 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.763 01:06:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # cut -f2 00:06:28.763 01:06:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.763 01:06:02 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.763 01:06:02 version -- app/version.sh@22 -- # version=25.1 00:06:28.763 01:06:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.763 01:06:02 version -- app/version.sh@28 -- # version=25.1rc0 00:06:28.763 01:06:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.763 01:06:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.763 01:06:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.763 01:06:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.763 00:06:28.763 real 0m0.187s 00:06:28.763 user 0m0.122s 00:06:28.763 sys 0m0.092s 00:06:28.763 ************************************ 00:06:28.763 END TEST version 00:06:28.763 ************************************ 00:06:28.763 01:06:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.764 01:06:02 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.764 01:06:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.764 01:06:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:28.764 01:06:02 -- spdk/autotest.sh@194 -- # uname -s 00:06:28.764 01:06:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:28.764 01:06:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.764 01:06:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.764 01:06:02 -- spdk/autotest.sh@207 
-- # '[' 1 -eq 1 ']' 00:06:28.764 01:06:02 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:28.764 01:06:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.764 01:06:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.764 01:06:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.764 ************************************ 00:06:28.764 START TEST blockdev_nvme 00:06:28.764 ************************************ 00:06:28.764 01:06:02 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:28.764 * Looking for test storage... 00:06:28.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:28.764 01:06:02 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.764 01:06:02 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.764 01:06:02 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.022 01:06:02 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
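The version test above greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and assembles "25.1rc0". A hedged sketch of that assembly (the exact rc0 rule for pre-release suffixes is inferred from the logged values, not quoted from version.sh):

```python
# Sketch of how app/version.sh builds the version string, per the values
# logged above: major=25, minor=1, patch=0, suffix=-pre -> "25.1rc0".
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:                 # the log skips this branch since patch=0
        version += f".{patch}"
    if suffix:                     # assumption: any pre-release suffix -> rc0
        version += "rc0"
    return version

assert spdk_version(25, 1, 0, "-pre") == "25.1rc0"
```

The test then compares this shell-derived string against `spdk.__version__` from Python ("25.1rc0" in the log) to confirm the two stay in sync.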
00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.022 01:06:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.023 01:06:02 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.023 --rc genhtml_branch_coverage=1 00:06:29.023 --rc genhtml_function_coverage=1 00:06:29.023 --rc genhtml_legend=1 00:06:29.023 --rc geninfo_all_blocks=1 00:06:29.023 --rc geninfo_unexecuted_blocks=1 00:06:29.023 00:06:29.023 ' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.023 --rc genhtml_branch_coverage=1 00:06:29.023 --rc genhtml_function_coverage=1 00:06:29.023 --rc genhtml_legend=1 00:06:29.023 --rc geninfo_all_blocks=1 00:06:29.023 --rc geninfo_unexecuted_blocks=1 00:06:29.023 00:06:29.023 ' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.023 --rc genhtml_branch_coverage=1 00:06:29.023 --rc genhtml_function_coverage=1 00:06:29.023 --rc genhtml_legend=1 00:06:29.023 --rc geninfo_all_blocks=1 00:06:29.023 --rc geninfo_unexecuted_blocks=1 00:06:29.023 00:06:29.023 ' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.023 --rc genhtml_branch_coverage=1 00:06:29.023 --rc genhtml_function_coverage=1 00:06:29.023 --rc genhtml_legend=1 00:06:29.023 --rc geninfo_all_blocks=1 00:06:29.023 --rc geninfo_unexecuted_blocks=1 00:06:29.023 00:06:29.023 ' 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.023 01:06:02 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:29.023 01:06:02 
blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73207 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73207 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73207 ']' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.023 01:06:02 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:29.023 01:06:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.023 [2024-12-14 01:06:02.468764] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:29.023 [2024-12-14 01:06:02.468993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73207 ] 00:06:29.023 [2024-12-14 01:06:02.608554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.023 [2024-12-14 01:06:02.628454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.956 01:06:03 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.956 01:06:03 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:29.956 01:06:03 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:29.956 01:06:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:29.956 01:06:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:29.956 01:06:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:29.956 01:06:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.957 01:06:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", 
"traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:29.957 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.957 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.218 01:06:03 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.218 01:06:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:30.218 01:06:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.218 01:06:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.218 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.219 01:06:03 blockdev_nvme -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0574d035-fd73-4c01-b126-989b29f96bd5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0574d035-fd73-4c01-b126-989b29f96bd5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5eca4697-2f37-4254-9bcd-6af33484b088"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5eca4697-2f37-4254-9bcd-6af33484b088",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b73b731e-7b25-49ca-8216-cdf65ebb2693"' ' ],' ' "product_name": "NVMe disk",' ' 
"block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b73b731e-7b25-49ca-8216-cdf65ebb2693",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cf9a8bce-dd53-424b-8264-ae71d0cbb9e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cf9a8bce-dd53-424b-8264-ae71d0cbb9e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ed40ace7-c286-4e2e-a3f6-52c839ee8803"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed40ace7-c286-4e2e-a3f6-52c839ee8803",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 
0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0ae090a4-e7d5-40aa-bea8-6f8866f0faa4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0ae090a4-e7d5-40aa-bea8-6f8866f0faa4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:30.219 01:06:03 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73207 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73207 ']' 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73207 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73207 00:06:30.219 killing process with pid 73207 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73207' 00:06:30.219 01:06:03 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73207 00:06:30.220 01:06:03 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73207 00:06:30.478 01:06:04 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:30.478 01:06:04 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:30.478 01:06:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:30.478 01:06:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.478 01:06:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 
00:06:30.478 ************************************ 00:06:30.478 START TEST bdev_hello_world 00:06:30.478 ************************************ 00:06:30.478 01:06:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:30.735 [2024-12-14 01:06:04.140003] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:30.735 [2024-12-14 01:06:04.140459] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:06:30.735 [2024-12-14 01:06:04.285849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.735 [2024-12-14 01:06:04.305817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.301 [2024-12-14 01:06:04.678450] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:31.301 [2024-12-14 01:06:04.678498] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:31.301 [2024-12-14 01:06:04.678517] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:31.301 [2024-12-14 01:06:04.680586] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:31.301 [2024-12-14 01:06:04.681002] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:31.301 [2024-12-14 01:06:04.681028] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:31.301 [2024-12-14 01:06:04.681245] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:31.301 00:06:31.301 [2024-12-14 01:06:04.681261] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:31.301 00:06:31.301 real 0m0.742s 00:06:31.301 user 0m0.491s 00:06:31.301 sys 0m0.148s 00:06:31.301 ************************************ 00:06:31.301 END TEST bdev_hello_world 00:06:31.301 ************************************ 00:06:31.301 01:06:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.301 01:06:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:31.301 01:06:04 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:31.301 01:06:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.301 01:06:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.301 01:06:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.301 ************************************ 00:06:31.301 START TEST bdev_bounds 00:06:31.301 ************************************ 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:31.301 Process bdevio pid: 73306 00:06:31.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73306 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73306' 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73306 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73306 ']' 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.301 01:06:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:31.633 [2024-12-14 01:06:04.918723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:31.633 [2024-12-14 01:06:04.918828] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73306 ] 00:06:31.633 [2024-12-14 01:06:05.063386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.633 [2024-12-14 01:06:05.085745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.633 [2024-12-14 01:06:05.085981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.633 [2024-12-14 01:06:05.086071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.199 01:06:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.199 01:06:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:32.199 01:06:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:32.459 I/O targets: 00:06:32.459 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:32.459 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:32.459 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:32.459 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:32.459 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:32.459 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:32.459 00:06:32.459 00:06:32.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.459 http://cunit.sourceforge.net/ 00:06:32.459 00:06:32.459 00:06:32.459 Suite: bdevio tests on: Nvme3n1 00:06:32.459 Test: blockdev write read block ...passed 00:06:32.459 Test: blockdev write zeroes read block ...passed 00:06:32.459 Test: blockdev write zeroes read no split ...passed 00:06:32.459 Test: blockdev write zeroes read split ...passed 00:06:32.459 Test: blockdev write zeroes read split partial ...passed 
00:06:32.459 Test: blockdev reset ...[2024-12-14 01:06:05.869260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:32.459 passed 00:06:32.459 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:05.870966] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:32.459 passed 00:06:32.459 Test: blockdev write read size > 128k ...passed 00:06:32.459 Test: blockdev write read invalid size ...passed 00:06:32.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.459 Test: blockdev write read max offset ...passed 00:06:32.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.459 Test: blockdev writev readv 8 blocks ...passed 00:06:32.459 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.459 Test: blockdev writev readv block ...passed 00:06:32.459 Test: blockdev writev readv size > 128k ...passed 00:06:32.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.459 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.875002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf806000 len:0x1000 00:06:32.459 [2024-12-14 01:06:05.875055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:32.459 passed 00:06:32.459 Test: blockdev nvme passthru rw ...passed 00:06:32.459 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.459 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.875556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:32.459 [2024-12-14 01:06:05.875590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 
cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:32.459 passed 00:06:32.459 Test: blockdev copy ...passed 00:06:32.459 Suite: bdevio tests on: Nvme2n3 00:06:32.459 Test: blockdev write read block ...passed 00:06:32.459 Test: blockdev write zeroes read block ...passed 00:06:32.459 Test: blockdev write zeroes read no split ...passed 00:06:32.459 Test: blockdev write zeroes read split ...passed 00:06:32.459 Test: blockdev write zeroes read split partial ...passed 00:06:32.459 Test: blockdev reset ...[2024-12-14 01:06:05.891497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:32.459 passed 00:06:32.459 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:05.893368] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:32.459 passed 00:06:32.459 Test: blockdev write read size > 128k ...passed 00:06:32.459 Test: blockdev write read invalid size ...passed 00:06:32.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.459 Test: blockdev write read max offset ...passed 00:06:32.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.459 Test: blockdev writev readv 8 blocks ...passed 00:06:32.459 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.459 Test: blockdev writev readv block ...passed 00:06:32.459 Test: blockdev writev readv size > 128k ...passed 00:06:32.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.459 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.897538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd402000 len:0x1000 00:06:32.459 [2024-12-14 01:06:05.897576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 
00:06:32.459 passed 00:06:32.459 Test: blockdev nvme passthru rw ...passed 00:06:32.459 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.459 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.898018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-14 01:06:05.898041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:32.459 passed 00:06:32.460 Test: blockdev copy ...passed 00:06:32.460 Suite: bdevio tests on: Nvme2n2 00:06:32.460 Test: blockdev write read block ...passed 00:06:32.460 Test: blockdev write zeroes read block ...passed 00:06:32.460 Test: blockdev write zeroes read no split ...passed 00:06:32.460 Test: blockdev write zeroes read split ...passed 00:06:32.460 Test: blockdev write zeroes read split partial ...passed 00:06:32.460 Test: blockdev reset ...[2024-12-14 01:06:05.914209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller [2024-12-14 01:06:05.916032] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:32.460 Test: blockdev write read 8 blocks ...passed 00:06:32.460 Test: blockdev write read size > 128k ...
00:06:32.460 passed 00:06:32.460 Test: blockdev write read invalid size ...passed 00:06:32.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.460 Test: blockdev write read max offset ...passed 00:06:32.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.460 Test: blockdev writev readv 8 blocks ...passed 00:06:32.460 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.460 Test: blockdev writev readv block ...passed 00:06:32.460 Test: blockdev writev readv size > 128k ...passed 00:06:32.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.460 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.919721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d483b000 len:0x1000 [2024-12-14 01:06:05.919848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev nvme passthru rw ...passed 00:06:32.460 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.460 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.920236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-14 01:06:05.920259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev copy ...passed 00:06:32.460 Suite: bdevio tests on: Nvme2n1 00:06:32.460 Test: blockdev write read block ...passed 00:06:32.460 Test: blockdev write zeroes read block ...passed 00:06:32.460 Test: blockdev write zeroes read no split ...passed 00:06:32.460 Test: blockdev write zeroes read split ...passed 00:06:32.460 Test:
blockdev write zeroes read split partial ...passed 00:06:32.460 Test: blockdev reset ...[2024-12-14 01:06:05.935901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:32.460 passed 00:06:32.460 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:05.937690] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:32.460 passed 00:06:32.460 Test: blockdev write read size > 128k ...passed 00:06:32.460 Test: blockdev write read invalid size ...passed 00:06:32.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.460 Test: blockdev write read max offset ...passed 00:06:32.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.460 Test: blockdev writev readv 8 blocks ...passed 00:06:32.460 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.460 Test: blockdev writev readv block ...passed 00:06:32.460 Test: blockdev writev readv size > 128k ...passed 00:06:32.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.460 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.941519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4837000 len:0x1000 00:06:32.460 [2024-12-14 01:06:05.941555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev nvme passthru rw ...passed 00:06:32.460 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.460 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.941992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:32.460 [2024-12-14 01:06:05.942018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev copy ...passed 00:06:32.460 Suite: bdevio tests on: Nvme1n1 00:06:32.460 Test: blockdev write read block ...passed 00:06:32.460 Test: blockdev write zeroes read block ...passed 00:06:32.460 Test: blockdev write zeroes read no split ...passed 00:06:32.460 Test: blockdev write zeroes read split ...passed 00:06:32.460 Test: blockdev write zeroes read split partial ...passed 00:06:32.460 Test: blockdev reset ...[2024-12-14 01:06:05.957674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller [2024-12-14 01:06:05.959148] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:32.460 Test: blockdev write read 8 blocks ...passed 00:06:32.460 Test: blockdev write read size > 128k ... 00:06:32.460 passed 00:06:32.460 Test: blockdev write read invalid size ...passed 00:06:32.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.460 Test: blockdev write read max offset ...passed 00:06:32.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.460 Test: blockdev writev readv 8 blocks ...passed 00:06:32.460 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.460 Test: blockdev writev readv block ...passed 00:06:32.460 Test: blockdev writev readv size > 128k ...passed 00:06:32.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.460 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.963299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4833000 len:0x1000 passed 00:06:32.460 Test: blockdev nvme passthru rw ...[2024-12-14 01:06:05.963407]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.460 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.964205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:32.460 [2024-12-14 01:06:05.964235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev copy ...passed 00:06:32.460 Suite: bdevio tests on: Nvme0n1 00:06:32.460 Test: blockdev write read block ...passed 00:06:32.460 Test: blockdev write zeroes read block ...passed 00:06:32.460 Test: blockdev write zeroes read no split ...passed 00:06:32.460 Test: blockdev write zeroes read split ...passed 00:06:32.460 Test: blockdev write zeroes read split partial ...passed 00:06:32.460 Test: blockdev reset ...[2024-12-14 01:06:05.979568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:32.460 passed 00:06:32.460 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:05.981083] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:32.460 passed 00:06:32.460 Test: blockdev write read size > 128k ...passed 00:06:32.460 Test: blockdev write read invalid size ...passed 00:06:32.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.460 Test: blockdev write read max offset ...passed 00:06:32.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.460 Test: blockdev writev readv 8 blocks ...passed 00:06:32.460 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.460 Test: blockdev writev readv block ...passed 00:06:32.460 Test: blockdev writev readv size > 128k ...passed 00:06:32.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.460 Test: blockdev comparev and writev ...[2024-12-14 01:06:05.984422] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:32.460 separate metadata which is not supported yet. 00:06:32.460 passed 00:06:32.460 Test: blockdev nvme passthru rw ...
00:06:32.460 passed 00:06:32.460 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.460 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:05.984718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:32.460 [2024-12-14 01:06:05.984752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:32.460 passed 00:06:32.460 Test: blockdev copy ...passed 00:06:32.460 00:06:32.460 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.460 suites 6 6 n/a 0 0 00:06:32.460 tests 138 138 138 0 0 00:06:32.460 asserts 893 893 893 0 n/a 00:06:32.460 00:06:32.460 Elapsed time = 0.305 seconds 00:06:32.460 0 00:06:32.460 01:06:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73306 00:06:32.460 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73306 ']' 00:06:32.460 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73306 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73306 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73306' 00:06:32.461 killing process with pid 73306 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73306 00:06:32.461 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73306 00:06:32.721 01:06:06 
blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:32.721 00:06:32.721 real 0m1.300s 00:06:32.721 user 0m3.392s 00:06:32.721 sys 0m0.239s 00:06:32.721 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.721 01:06:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:32.721 ************************************ 00:06:32.721 END TEST bdev_bounds 00:06:32.721 ************************************ 00:06:32.721 01:06:06 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:32.721 01:06:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:32.721 01:06:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.721 01:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.721 ************************************ 00:06:32.721 START TEST bdev_nbd 00:06:32.721 ************************************ 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:32.721 01:06:06 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73354 00:06:32.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73354 /var/tmp/spdk-nbd.sock 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73354 ']' 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:32.721 01:06:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:32.721 [2024-12-14 01:06:06.283163] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:32.721 [2024-12-14 01:06:06.283317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.981 [2024-12-14 01:06:06.436129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.981 [2024-12-14 01:06:06.455569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.550 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:33.811 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:33.811 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.812 1+0 records in 00:06:33.812 1+0 records out 00:06:33.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348041 s, 11.8 MB/s 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.812 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.070 1+0 records in 00:06:34.070 1+0 records out 00:06:34.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347039 s, 11.8 MB/s 00:06:34.070 01:06:07 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.070 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.328 1+0 records in 00:06:34.328 1+0 records out 00:06:34.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360035 s, 11.4 MB/s 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.328 01:06:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.586 1+0 records in 00:06:34.586 1+0 records out 00:06:34.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401328 s, 10.2 MB/s 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.586 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.843 1+0 records in 00:06:34.843 1+0 records out 00:06:34.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510535 s, 8.0 MB/s 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.843 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.102 1+0 records in 00:06:35.102 1+0 records out 00:06:35.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422799 s, 9.7 MB/s 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.102 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd0", 00:06:35.361 "bdev_name": "Nvme0n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd1", 00:06:35.361 "bdev_name": "Nvme1n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd2", 00:06:35.361 "bdev_name": "Nvme2n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd3", 00:06:35.361 "bdev_name": "Nvme2n2" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd4", 00:06:35.361 "bdev_name": "Nvme2n3" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd5", 00:06:35.361 "bdev_name": "Nvme3n1" 00:06:35.361 } 00:06:35.361 ]' 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd0", 00:06:35.361 "bdev_name": "Nvme0n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd1", 00:06:35.361 "bdev_name": "Nvme1n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd2", 00:06:35.361 "bdev_name": "Nvme2n1" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd3", 00:06:35.361 "bdev_name": "Nvme2n2" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd4", 00:06:35.361 "bdev_name": "Nvme2n3" 00:06:35.361 }, 00:06:35.361 { 00:06:35.361 "nbd_device": "/dev/nbd5", 00:06:35.361 "bdev_name": "Nvme3n1" 00:06:35.361 } 00:06:35.361 ]' 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:35.361 01:06:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.361 01:06:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.623 
01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.623 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:35.883 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.884 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.884 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.884 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 
00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.144 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.403 01:06:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:36.660 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:36.660 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd5 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.661 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify 
/var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.918 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1 /dev/nbd0 00:06:36.919 /dev/nbd0 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.919 1+0 records in 00:06:36.919 1+0 records out 00:06:36.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320425 s, 12.8 MB/s 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.919 01:06:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.919 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:37.177 /dev/nbd1 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.177 1+0 records in 00:06:37.177 1+0 records out 00:06:37.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277322 s, 14.8 MB/s 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.177 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:37.435 /dev/nbd10 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.435 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.436 1+0 records in 00:06:37.436 1+0 records out 00:06:37.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473852 s, 8.6 MB/s 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
size=4096 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.436 01:06:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:37.693 /dev/nbd11 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.693 1+0 records in 00:06:37.693 1+0 records out 00:06:37.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394379 s, 10.4 MB/s 
00:06:37.693 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.694 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:37.951 /dev/nbd12 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.951 1+0 records in 00:06:37.951 1+0 records out 00:06:37.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374014 s, 11.0 MB/s 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.951 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:38.209 /dev/nbd13 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.210 1+0 records in 00:06:38.210 1+0 records out 00:06:38.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440464 s, 9.3 MB/s 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.210 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.577 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.577 { 00:06:38.577 "nbd_device": "/dev/nbd0", 00:06:38.578 "bdev_name": "Nvme0n1" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd1", 00:06:38.578 "bdev_name": "Nvme1n1" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd10", 00:06:38.578 "bdev_name": "Nvme2n1" 00:06:38.578 }, 
00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd11", 00:06:38.578 "bdev_name": "Nvme2n2" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd12", 00:06:38.578 "bdev_name": "Nvme2n3" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd13", 00:06:38.578 "bdev_name": "Nvme3n1" 00:06:38.578 } 00:06:38.578 ]' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd0", 00:06:38.578 "bdev_name": "Nvme0n1" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd1", 00:06:38.578 "bdev_name": "Nvme1n1" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd10", 00:06:38.578 "bdev_name": "Nvme2n1" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd11", 00:06:38.578 "bdev_name": "Nvme2n2" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd12", 00:06:38.578 "bdev_name": "Nvme2n3" 00:06:38.578 }, 00:06:38.578 { 00:06:38.578 "nbd_device": "/dev/nbd13", 00:06:38.578 "bdev_name": "Nvme3n1" 00:06:38.578 } 00:06:38.578 ]' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.578 /dev/nbd1 00:06:38.578 /dev/nbd10 00:06:38.578 /dev/nbd11 00:06:38.578 /dev/nbd12 00:06:38.578 /dev/nbd13' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.578 /dev/nbd1 00:06:38.578 /dev/nbd10 00:06:38.578 /dev/nbd11 00:06:38.578 /dev/nbd12 00:06:38.578 /dev/nbd13' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:38.578 01:06:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:38.578 256+0 records in 00:06:38.578 256+0 records out 00:06:38.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613954 s, 171 MB/s 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.578 01:06:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.578 256+0 records in 00:06:38.578 256+0 records out 00:06:38.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0580741 s, 18.1 MB/s 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.578 256+0 records in 00:06:38.578 256+0 records out 00:06:38.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0615863 s, 17.0 MB/s 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:38.578 256+0 records in 00:06:38.578 256+0 records out 00:06:38.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639334 s, 16.4 MB/s 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.578 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:38.837 256+0 records in 00:06:38.837 256+0 records out 00:06:38.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0605825 s, 17.3 MB/s 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:38.837 256+0 records in 00:06:38.837 256+0 records out 00:06:38.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0595199 s, 17.6 MB/s 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:38.837 256+0 records in 00:06:38.837 256+0 records out 00:06:38.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0625876 s, 16.8 MB/s 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:38.837 
01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:38.837 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.838 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.838 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.838 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.838 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.838 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.095 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.096 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.096 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.353 01:06:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.612 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.933 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.219 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.478 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.478 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.478 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.478 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:40.479 01:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:40.479 malloc_lvol_verify 00:06:40.479 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:40.737 01830d6d-7a52-44b0-9b13-ce9519edb823 00:06:40.737 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:40.995 2ff6208e-3c72-424b-b6fe-bfa971d4f8d7 00:06:40.995 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:41.254 /dev/nbd0 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 
/dev/nbd0 00:06:41.254 mke2fs 1.47.0 (5-Feb-2023) 00:06:41.254 Discarding device blocks: 0/4096 done 00:06:41.254 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:41.254 00:06:41.254 Allocating group tables: 0/1 done 00:06:41.254 Writing inode tables: 0/1 done 00:06:41.254 Creating journal (1024 blocks): done 00:06:41.254 Writing superblocks and filesystem accounting information: 0/1 done 00:06:41.254 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.254 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@325 -- # killprocess 73354 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73354 ']' 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73354 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73354 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.513 killing process with pid 73354 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73354' 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73354 00:06:41.513 01:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73354 00:06:41.513 01:06:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:41.513 00:06:41.513 real 0m8.870s 00:06:41.513 user 0m13.214s 00:06:41.513 sys 0m2.907s 00:06:41.513 01:06:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.513 01:06:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:41.513 ************************************ 00:06:41.513 END TEST bdev_nbd 00:06:41.513 ************************************ 00:06:41.514 01:06:15 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:41.514 01:06:15 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:41.514 skipping fio tests on NVMe due to multi-ns failures. 00:06:41.514 01:06:15 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:41.514 01:06:15 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:41.514 01:06:15 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:41.514 01:06:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:41.514 01:06:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.514 01:06:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.514 ************************************ 00:06:41.514 START TEST bdev_verify 00:06:41.514 ************************************ 00:06:41.514 01:06:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:41.771 [2024-12-14 01:06:15.173037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:41.771 [2024-12-14 01:06:15.173145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73718 ] 00:06:41.771 [2024-12-14 01:06:15.318179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.771 [2024-12-14 01:06:15.338502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.771 [2024-12-14 01:06:15.338567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.337 Running I/O for 5 seconds... 
00:06:44.647 23296.00 IOPS, 91.00 MiB/s [2024-12-14T01:06:19.195Z] 23808.00 IOPS, 93.00 MiB/s [2024-12-14T01:06:20.129Z] 24320.00 IOPS, 95.00 MiB/s [2024-12-14T01:06:21.065Z] 25104.00 IOPS, 98.06 MiB/s [2024-12-14T01:06:21.065Z] 25216.00 IOPS, 98.50 MiB/s 00:06:47.453 Latency(us) 00:06:47.453 [2024-12-14T01:06:21.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:47.453 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0xbd0bd 00:06:47.453 Nvme0n1 : 5.05 2155.46 8.42 0.00 0.00 59114.57 10183.29 70577.23 00:06:47.453 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:47.453 Nvme0n1 : 5.04 1979.75 7.73 0.00 0.00 64387.27 10838.65 68560.74 00:06:47.453 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0xa0000 00:06:47.453 Nvme1n1 : 5.07 2157.43 8.43 0.00 0.00 58957.54 7763.50 62107.96 00:06:47.453 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0xa0000 length 0xa0000 00:06:47.453 Nvme1n1 : 5.04 1979.21 7.73 0.00 0.00 64297.19 13208.02 61301.37 00:06:47.453 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0x80000 00:06:47.453 Nvme2n1 : 5.08 2156.20 8.42 0.00 0.00 58867.92 9124.63 55251.89 00:06:47.453 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x80000 length 0x80000 00:06:47.453 Nvme2n1 : 5.06 1984.63 7.75 0.00 0.00 64049.63 7208.96 55251.89 00:06:47.453 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0x80000 00:06:47.453 Nvme2n2 : 5.08 2155.63 8.42 0.00 0.00 58767.40 9326.28 56461.78 
00:06:47.453 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x80000 length 0x80000 00:06:47.453 Nvme2n2 : 5.09 1988.46 7.77 0.00 0.00 63880.76 13611.32 54445.29 00:06:47.453 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0x80000 00:06:47.453 Nvme2n3 : 5.09 2163.86 8.45 0.00 0.00 58564.83 7813.91 57671.68 00:06:47.453 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x80000 length 0x80000 00:06:47.453 Nvme2n3 : 5.09 1987.25 7.76 0.00 0.00 63800.24 12149.37 56461.78 00:06:47.453 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x0 length 0x20000 00:06:47.453 Nvme3n1 : 5.09 2162.83 8.45 0.00 0.00 58478.87 9175.04 58881.58 00:06:47.453 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:47.453 Verification LBA range: start 0x20000 length 0x20000 00:06:47.453 Nvme3n1 : 5.09 1986.31 7.76 0.00 0.00 63708.25 10939.47 57671.68 00:06:47.453 [2024-12-14T01:06:21.065Z] =================================================================================================================== 00:06:47.453 [2024-12-14T01:06:21.065Z] Total : 24857.00 97.10 0.00 0.00 61293.62 7208.96 70577.23 00:06:48.834 00:06:48.834 real 0m7.024s 00:06:48.834 user 0m13.397s 00:06:48.834 sys 0m0.197s 00:06:48.834 01:06:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.834 ************************************ 00:06:48.834 END TEST bdev_verify 00:06:48.834 ************************************ 00:06:48.834 01:06:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 01:06:22 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:48.834 01:06:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:48.834 01:06:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.834 01:06:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 ************************************ 00:06:48.834 START TEST bdev_verify_big_io 00:06:48.834 ************************************ 00:06:48.834 01:06:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:48.834 [2024-12-14 01:06:22.235667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:48.834 [2024-12-14 01:06:22.235882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73810 ] 00:06:48.834 [2024-12-14 01:06:22.382363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.834 [2024-12-14 01:06:22.402881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.834 [2024-12-14 01:06:22.402946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.401 Running I/O for 5 seconds... 
00:06:53.315 16.00 IOPS, 1.00 MiB/s [2024-12-14T01:06:28.855Z] 1179.00 IOPS, 73.69 MiB/s [2024-12-14T01:06:29.113Z] 2061.33 IOPS, 128.83 MiB/s 00:06:55.501 Latency(us) 00:06:55.501 [2024-12-14T01:06:29.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.501 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.501 Verification LBA range: start 0x0 length 0xbd0b 00:06:55.501 Nvme0n1 : 5.60 125.66 7.85 0.00 0.00 977606.43 18047.61 1193763.45 00:06:55.501 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.501 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:55.501 Nvme0n1 : 5.88 119.64 7.48 0.00 0.00 1005639.43 12098.95 1187310.67 00:06:55.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x0 length 0xa000 00:06:55.502 Nvme1n1 : 5.76 129.48 8.09 0.00 0.00 918506.30 71383.83 987274.63 00:06:55.502 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0xa000 length 0xa000 00:06:55.502 Nvme1n1 : 5.97 124.89 7.81 0.00 0.00 950155.21 84692.68 987274.63 00:06:55.502 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x0 length 0x8000 00:06:55.502 Nvme2n1 : 5.76 133.26 8.33 0.00 0.00 869342.65 85499.27 1019538.51 00:06:55.502 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x8000 length 0x8000 00:06:55.502 Nvme2n1 : 5.97 125.03 7.81 0.00 0.00 914802.55 83482.78 909841.33 00:06:55.502 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x0 length 0x8000 00:06:55.502 Nvme2n2 : 5.83 135.60 8.48 0.00 0.00 820696.82 67754.14 1038896.84 00:06:55.502 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 
Verification LBA range: start 0x8000 length 0x8000 00:06:55.502 Nvme2n2 : 5.97 128.67 8.04 0.00 0.00 865385.81 79449.80 929199.66 00:06:55.502 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x0 length 0x8000 00:06:55.502 Nvme2n3 : 5.93 151.00 9.44 0.00 0.00 717463.80 35086.97 1064707.94 00:06:55.502 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x8000 length 0x8000 00:06:55.502 Nvme2n3 : 6.02 136.69 8.54 0.00 0.00 790266.89 11544.42 1109877.37 00:06:55.502 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x0 length 0x2000 00:06:55.502 Nvme3n1 : 6.01 174.36 10.90 0.00 0.00 601176.01 636.46 1084066.26 00:06:55.502 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.502 Verification LBA range: start 0x2000 length 0x2000 00:06:55.502 Nvme3n1 : 6.04 145.21 9.08 0.00 0.00 717223.32 1291.82 1832588.21 00:06:55.502 [2024-12-14T01:06:29.114Z] =================================================================================================================== 00:06:55.502 [2024-12-14T01:06:29.114Z] Total : 1629.49 101.84 0.00 0.00 832512.09 636.46 1832588.21 00:06:56.878 00:06:56.878 real 0m8.099s 00:06:56.878 user 0m15.546s 00:06:56.878 sys 0m0.198s 00:06:56.878 01:06:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.878 ************************************ 00:06:56.878 END TEST bdev_verify_big_io 00:06:56.878 ************************************ 00:06:56.878 01:06:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:56.878 01:06:30 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:06:56.878 01:06:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:56.878 01:06:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.878 01:06:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.878 ************************************ 00:06:56.878 START TEST bdev_write_zeroes 00:06:56.878 ************************************ 00:06:56.878 01:06:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:56.878 [2024-12-14 01:06:30.379645] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:56.878 [2024-12-14 01:06:30.379761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73908 ] 00:06:57.137 [2024-12-14 01:06:30.525766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.137 [2024-12-14 01:06:30.545538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.395 Running I/O for 1 seconds... 
00:06:58.768 76416.00 IOPS, 298.50 MiB/s 00:06:58.769 Latency(us) 00:06:58.769 [2024-12-14T01:06:32.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.769 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme0n1 : 1.02 12661.03 49.46 0.00 0.00 10078.52 6755.25 21374.82 00:06:58.769 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme1n1 : 1.02 12631.32 49.34 0.00 0.00 10084.61 7208.96 21576.47 00:06:58.769 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme2n1 : 1.03 12602.74 49.23 0.00 0.00 10084.12 7158.55 20265.75 00:06:58.769 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme2n2 : 1.03 12574.02 49.12 0.00 0.00 10071.53 7108.14 19862.45 00:06:58.769 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme2n3 : 1.03 12560.03 49.06 0.00 0.00 10037.81 6956.90 18854.20 00:06:58.769 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.769 Nvme3n1 : 1.03 12549.58 49.02 0.00 0.00 10014.46 6856.07 19358.33 00:06:58.769 [2024-12-14T01:06:32.381Z] =================================================================================================================== 00:06:58.769 [2024-12-14T01:06:32.381Z] Total : 75578.71 295.23 0.00 0.00 10061.84 6755.25 21576.47 00:06:58.769 00:06:58.769 real 0m1.793s 00:06:58.769 user 0m1.525s 00:06:58.769 sys 0m0.160s 00:06:58.769 01:06:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.769 01:06:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:58.769 ************************************ 00:06:58.769 END TEST bdev_write_zeroes 00:06:58.769 ************************************ 00:06:58.769 01:06:32 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.769 01:06:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:58.769 01:06:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.769 01:06:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.769 ************************************ 00:06:58.769 START TEST bdev_json_nonenclosed 00:06:58.769 ************************************ 00:06:58.769 01:06:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.769 [2024-12-14 01:06:32.227613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:58.769 [2024-12-14 01:06:32.227744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73939 ] 00:06:58.769 [2024-12-14 01:06:32.368957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.028 [2024-12-14 01:06:32.387030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.028 [2024-12-14 01:06:32.387107] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:59.028 [2024-12-14 01:06:32.387119] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:59.028 [2024-12-14 01:06:32.387128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.028 00:06:59.028 real 0m0.273s 00:06:59.028 user 0m0.091s 00:06:59.028 sys 0m0.080s 00:06:59.028 ************************************ 00:06:59.028 END TEST bdev_json_nonenclosed 00:06:59.028 01:06:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.028 01:06:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:59.028 ************************************ 00:06:59.028 01:06:32 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.028 01:06:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:59.028 01:06:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.028 01:06:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.028 ************************************ 00:06:59.028 START TEST bdev_json_nonarray 00:06:59.028 ************************************ 00:06:59.028 01:06:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.028 [2024-12-14 01:06:32.540504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:59.028 [2024-12-14 01:06:32.540610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73970 ] 00:06:59.286 [2024-12-14 01:06:32.681616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.286 [2024-12-14 01:06:32.699572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.286 [2024-12-14 01:06:32.699660] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:59.286 [2024-12-14 01:06:32.699677] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:59.286 [2024-12-14 01:06:32.699686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.286 00:06:59.286 real 0m0.267s 00:06:59.286 user 0m0.101s 00:06:59.286 sys 0m0.064s 00:06:59.286 ************************************ 00:06:59.286 END TEST bdev_json_nonarray 00:06:59.286 ************************************ 00:06:59.286 01:06:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.286 01:06:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:59.286 01:06:32 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:59.286 01:06:32 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:59.286 01:06:32 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:59.286 01:06:32 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:59.286 01:06:32 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:59.287 01:06:32 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:59.287 ************************************ 00:06:59.287 END TEST blockdev_nvme 00:06:59.287 ************************************ 00:06:59.287 00:06:59.287 real 0m30.537s 00:06:59.287 user 0m49.734s 00:06:59.287 sys 0m4.658s 00:06:59.287 01:06:32 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.287 01:06:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.287 01:06:32 -- spdk/autotest.sh@209 -- # uname -s 00:06:59.287 01:06:32 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:59.287 01:06:32 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.287 01:06:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.287 01:06:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.287 01:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:59.287 ************************************ 00:06:59.287 START TEST blockdev_nvme_gpt 00:06:59.287 ************************************ 00:06:59.287 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.546 * Looking for test storage... 
00:06:59.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:59.546 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.546 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.546 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.546 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.546 01:06:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.547 --rc genhtml_branch_coverage=1 00:06:59.547 --rc genhtml_function_coverage=1 00:06:59.547 --rc genhtml_legend=1 00:06:59.547 --rc geninfo_all_blocks=1 00:06:59.547 --rc geninfo_unexecuted_blocks=1 00:06:59.547 00:06:59.547 ' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.547 --rc genhtml_branch_coverage=1 00:06:59.547 --rc genhtml_function_coverage=1 00:06:59.547 --rc genhtml_legend=1 00:06:59.547 --rc geninfo_all_blocks=1 00:06:59.547 
--rc geninfo_unexecuted_blocks=1 00:06:59.547 00:06:59.547 ' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.547 --rc genhtml_branch_coverage=1 00:06:59.547 --rc genhtml_function_coverage=1 00:06:59.547 --rc genhtml_legend=1 00:06:59.547 --rc geninfo_all_blocks=1 00:06:59.547 --rc geninfo_unexecuted_blocks=1 00:06:59.547 00:06:59.547 ' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.547 --rc genhtml_branch_coverage=1 00:06:59.547 --rc genhtml_function_coverage=1 00:06:59.547 --rc genhtml_legend=1 00:06:59.547 --rc geninfo_all_blocks=1 00:06:59.547 --rc geninfo_unexecuted_blocks=1 00:06:59.547 00:06:59.547 ' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 
00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74043 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74043 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74043 ']' 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.547 01:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.547 01:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:59.547 [2024-12-14 01:06:33.068412] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:59.547 [2024-12-14 01:06:33.068545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74043 ] 00:06:59.806 [2024-12-14 01:06:33.210176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.806 [2024-12-14 01:06:33.228357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.374 01:06:33 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.374 01:06:33 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:00.374 01:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:00.374 01:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:00.374 01:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:00.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.890 Waiting for block devices as requested 00:07:00.890 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.890 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.890 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.890 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.147 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:06.147 01:06:39 blockdev_nvme_gpt -- 
bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:06.147 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:06.147 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:06.147 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 
00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:06.148 BYT; 00:07:06.148 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:06.148 BYT; 00:07:06.148 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:06.148 01:06:39 blockdev_nvme_gpt -- 
bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:06.148 01:06:39 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:07.523 01:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.523 01:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.523 01:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.523 01:06:41 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.523 01:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.523 01:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:09.449 The operation has completed successfully. 00:07:09.449 01:06:42 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:10.405 The operation has completed successfully. 
00:07:10.405 01:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.230 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.230 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.230 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.230 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.489 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:11.489 01:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.489 01:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.489 [] 00:07:11.489 01:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.489 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:11.489 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:11.489 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:11.490 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:11.490 01:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:11.490 01:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.490 01:06:44 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:11.750 
01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.750 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:11.750 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:11.751 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "584f5656-aef5-4cac-a1a9-b7b7d9cd918e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "584f5656-aef5-4cac-a1a9-b7b7d9cd918e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' 
"ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "01b98687-c06d-466b-9f44-5f382ea3a87a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "01b98687-c06d-466b-9f44-5f382ea3a87a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' 
"1ee34b17-718c-40d6-9801-0922715868d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1ee34b17-718c-40d6-9801-0922715868d9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "76d8ef46-60a4-4bd0-a61c-361e98daa2cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76d8ef46-60a4-4bd0-a61c-361e98daa2cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "65858329-fb52-40b3-83ac-4ccf857cc0da"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "65858329-fb52-40b3-83ac-4ccf857cc0da",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' 
"trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:11.751 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:11.751 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:11.751 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:11.751 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74043 00:07:11.751 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74043 ']' 00:07:11.751 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74043 00:07:11.751 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74043 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.011 killing process with pid 74043 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74043' 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74043 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74043 00:07:12.011 01:06:45 blockdev_nvme_gpt -- 
bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.011 01:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.011 01:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.270 ************************************ 00:07:12.270 START TEST bdev_hello_world 00:07:12.270 ************************************ 00:07:12.270 01:06:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.270 [2024-12-14 01:06:45.675703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:12.270 [2024-12-14 01:06:45.675812] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74679 ] 00:07:12.270 [2024-12-14 01:06:45.813708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.270 [2024-12-14 01:06:45.831754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.837 [2024-12-14 01:06:46.191938] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:12.837 [2024-12-14 01:06:46.191988] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:12.837 [2024-12-14 01:06:46.192009] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:12.837 [2024-12-14 01:06:46.194073] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:12.837 [2024-12-14 01:06:46.195328] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:12.837 [2024-12-14 01:06:46.195375] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:12.837 [2024-12-14 01:06:46.195793] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:12.837 00:07:12.837 [2024-12-14 01:06:46.195831] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.837 00:07:12.837 real 0m0.719s 00:07:12.837 user 0m0.483s 00:07:12.837 sys 0m0.132s 00:07:12.837 ************************************ 00:07:12.837 END TEST bdev_hello_world 00:07:12.837 ************************************ 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.837 01:06:46 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.837 01:06:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.837 01:06:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.837 01:06:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.837 ************************************ 00:07:12.837 START TEST bdev_bounds 00:07:12.837 ************************************ 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74704 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.837 Process bdevio pid: 74704 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74704' 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74704 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74704 ']' 00:07:12.837 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.837 01:06:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.096 [2024-12-14 01:06:46.467937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:13.096 [2024-12-14 01:06:46.468049] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74704 ] 00:07:13.096 [2024-12-14 01:06:46.611444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.096 [2024-12-14 01:06:46.633222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.096 [2024-12-14 01:06:46.633841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.096 [2024-12-14 01:06:46.633878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.663 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.663 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:13.663 01:06:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:13.921 I/O targets: 00:07:13.921 Nvme0n1: 1548666 blocks of 4096 bytes (6050 
MiB) 00:07:13.921 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:13.921 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:13.921 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.921 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.921 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.921 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:13.921 00:07:13.921 00:07:13.921 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.921 http://cunit.sourceforge.net/ 00:07:13.921 00:07:13.921 00:07:13.921 Suite: bdevio tests on: Nvme3n1 00:07:13.921 Test: blockdev write read block ...passed 00:07:13.921 Test: blockdev write zeroes read block ...passed 00:07:13.921 Test: blockdev write zeroes read no split ...passed 00:07:13.921 Test: blockdev write zeroes read split ...passed 00:07:13.921 Test: blockdev write zeroes read split partial ...passed 00:07:13.921 Test: blockdev reset ...[2024-12-14 01:06:47.374071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:13.921 passed 00:07:13.921 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.377414] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:13.921 passed 00:07:13.921 Test: blockdev write read size > 128k ...passed 00:07:13.921 Test: blockdev write read invalid size ...passed 00:07:13.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.921 Test: blockdev write read max offset ...passed 00:07:13.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.921 Test: blockdev writev readv 8 blocks ...passed 00:07:13.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.921 Test: blockdev writev readv block ...passed 00:07:13.921 Test: blockdev writev readv size > 128k ...passed 00:07:13.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.921 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.385671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b580e000 len:0x1000 00:07:13.921 [2024-12-14 01:06:47.385720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.921 passed 00:07:13.921 Test: blockdev nvme passthru rw ...passed 00:07:13.921 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.921 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:47.386602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.921 [2024-12-14 01:06:47.386643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.921 passed 00:07:13.921 Test: blockdev copy ...passed 00:07:13.921 Suite: bdevio tests on: Nvme2n3 00:07:13.921 Test: blockdev write read block ...passed 00:07:13.921 Test: blockdev write zeroes read block ...passed 00:07:13.921 Test: blockdev write zeroes read no split ...passed 00:07:13.921 Test: blockdev write zeroes 
read split ...passed 00:07:13.921 Test: blockdev write zeroes read split partial ...passed 00:07:13.921 Test: blockdev reset ...[2024-12-14 01:06:47.414125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.921 passed 00:07:13.921 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.416092] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:13.921 passed 00:07:13.921 Test: blockdev write read size > 128k ...passed 00:07:13.921 Test: blockdev write read invalid size ...passed 00:07:13.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.921 Test: blockdev write read max offset ...passed 00:07:13.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.921 Test: blockdev writev readv 8 blocks ...passed 00:07:13.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.921 Test: blockdev writev readv block ...passed 00:07:13.921 Test: blockdev writev readv size > 128k ...passed 00:07:13.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.921 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.430436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5808000 len:0x1000 00:07:13.921 [2024-12-14 01:06:47.430471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.921 passed 00:07:13.921 Test: blockdev nvme passthru rw ...passed 00:07:13.921 Test: blockdev nvme passthru vendor specific ...[2024-12-14 01:06:47.432724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.921 [2024-12-14 01:06:47.432750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.921 passed 00:07:13.921 Test: blockdev nvme admin passthru ...passed 00:07:13.921 Test: blockdev copy ...passed 00:07:13.921 Suite: bdevio tests on: Nvme2n2 00:07:13.921 Test: blockdev write read block ...passed 00:07:13.921 Test: blockdev write zeroes read block ...passed 00:07:13.921 Test: blockdev write zeroes read no split ...passed 00:07:13.921 Test: blockdev write zeroes read split ...passed 00:07:13.921 Test: blockdev write zeroes read split partial ...passed 00:07:13.921 Test: blockdev reset ...[2024-12-14 01:06:47.452896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.921 passed 00:07:13.921 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.455000] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:13.921 passed 00:07:13.921 Test: blockdev write read size > 128k ...passed 00:07:13.921 Test: blockdev write read invalid size ...passed 00:07:13.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.921 Test: blockdev write read max offset ...passed 00:07:13.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.921 Test: blockdev writev readv 8 blocks ...passed 00:07:13.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.921 Test: blockdev writev readv block ...passed 00:07:13.922 Test: blockdev writev readv size > 128k ...passed 00:07:13.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.922 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.469897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5802000 len:0x1000 00:07:13.922 [2024-12-14 01:06:47.469942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.922 passed 00:07:13.922 Test: blockdev nvme passthru rw ...passed 00:07:13.922 Test: blockdev nvme passthru vendor specific ...[2024-12-14 01:06:47.472488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.922 [2024-12-14 01:06:47.472519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.922 passed 00:07:13.922 Test: blockdev nvme admin passthru ...passed 00:07:13.922 Test: blockdev copy ...passed 00:07:13.922 Suite: bdevio tests on: Nvme2n1 00:07:13.922 Test: blockdev write read block ...passed 00:07:13.922 Test: blockdev write zeroes read block ...passed 00:07:13.922 Test: blockdev write zeroes read no split ...passed 00:07:13.922 Test: blockdev write zeroes read split ...passed 00:07:13.922 Test: blockdev write zeroes read split partial ...passed 00:07:13.922 Test: blockdev reset ...[2024-12-14 01:06:47.492045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.922 passed 00:07:13.922 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.494931] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.922 passed 00:07:13.922 Test: blockdev write read size > 128k ...passed 00:07:13.922 Test: blockdev write read invalid size ...passed 00:07:13.922 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.922 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.922 Test: blockdev write read max offset ...passed 00:07:13.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.922 Test: blockdev writev readv 8 blocks ...passed 00:07:13.922 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.922 Test: blockdev writev readv block ...passed 00:07:13.922 Test: blockdev writev readv size > 128k ...passed 00:07:13.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.922 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.509717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5c04000 len:0x1000 00:07:13.922 [2024-12-14 01:06:47.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.922 passed 00:07:13.922 Test: blockdev nvme passthru rw ...passed 00:07:13.922 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.922 Test: blockdev nvme admin passthru ...[2024-12-14 01:06:47.511704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.922 [2024-12-14 01:06:47.511730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.922 passed 00:07:13.922 Test: blockdev copy ...passed 00:07:13.922 Suite: bdevio tests on: Nvme1n1p2 00:07:13.922 Test: blockdev write read block ...passed 00:07:13.922 Test: blockdev write zeroes read block ...passed 00:07:13.922 Test: blockdev write zeroes read no split ...passed 00:07:13.922 Test: blockdev write zeroes 
read split ...passed 00:07:14.180 Test: blockdev write zeroes read split partial ...passed 00:07:14.180 Test: blockdev reset ...[2024-12-14 01:06:47.531187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:14.180 passed 00:07:14.180 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.532730] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:14.180 passed 00:07:14.180 Test: blockdev write read size > 128k ...passed 00:07:14.180 Test: blockdev write read invalid size ...passed 00:07:14.180 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.180 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.180 Test: blockdev write read max offset ...passed 00:07:14.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.180 Test: blockdev writev readv 8 blocks ...passed 00:07:14.180 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.180 Test: blockdev writev readv block ...passed 00:07:14.180 Test: blockdev writev readv size > 128k ...passed 00:07:14.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.180 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.548223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d2c3d000 len:0x1000 00:07:14.180 [2024-12-14 01:06:47.548257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.180 passed 00:07:14.180 Test: blockdev nvme passthru rw ...passed 00:07:14.180 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.180 Test: blockdev nvme admin passthru ...passed 00:07:14.180 Test: blockdev copy ...passed 00:07:14.180 Suite: bdevio tests on: Nvme1n1p1 00:07:14.180 Test: blockdev write read block ...passed 00:07:14.180 Test: 
blockdev write zeroes read block ...passed 00:07:14.180 Test: blockdev write zeroes read no split ...passed 00:07:14.180 Test: blockdev write zeroes read split ...passed 00:07:14.180 Test: blockdev write zeroes read split partial ...passed 00:07:14.180 Test: blockdev reset ...[2024-12-14 01:06:47.567078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:14.180 passed 00:07:14.180 Test: blockdev write read 8 blocks ...[2024-12-14 01:06:47.568538] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:14.180 passed 00:07:14.180 Test: blockdev write read size > 128k ...passed 00:07:14.180 Test: blockdev write read invalid size ...passed 00:07:14.180 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.180 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.180 Test: blockdev write read max offset ...passed 00:07:14.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.180 Test: blockdev writev readv 8 blocks ...passed 00:07:14.180 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.180 Test: blockdev writev readv block ...passed 00:07:14.180 Test: blockdev writev readv size > 128k ...passed 00:07:14.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.180 Test: blockdev comparev and writev ...[2024-12-14 01:06:47.576637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d2c39000 len:0x1000 00:07:14.180 [2024-12-14 01:06:47.576698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.180 passed 00:07:14.180 Test: blockdev nvme passthru rw ...passed 00:07:14.180 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.180 Test: blockdev nvme admin passthru ...passed 00:07:14.180 Test: 
blockdev copy ...passed 00:07:14.180 Suite: bdevio tests on: Nvme0n1 00:07:14.180 Test: blockdev write read block ...passed 00:07:14.180 Test: blockdev write zeroes read block ...passed 00:07:14.180 Test: blockdev write zeroes read no split ...passed 00:07:14.180 Test: blockdev write zeroes read split ...passed 00:07:14.180 Test: blockdev write zeroes read split partial ...passed 00:07:14.180 Test: blockdev reset ...[2024-12-14 01:06:47.595188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:14.180 [2024-12-14 01:06:47.596709] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:14.180 passed 00:07:14.180 Test: blockdev write read 8 blocks ...passed 00:07:14.180 Test: blockdev write read size > 128k ...passed 00:07:14.180 Test: blockdev write read invalid size ...passed 00:07:14.180 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.180 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.180 Test: blockdev write read max offset ...passed 00:07:14.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.180 Test: blockdev writev readv 8 blocks ...passed 00:07:14.180 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.180 Test: blockdev writev readv block ...passed 00:07:14.180 Test: blockdev writev readv size > 128k ...passed 00:07:14.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.180 Test: blockdev comparev and writev ...passed 00:07:14.180 Test: blockdev nvme passthru rw ...[2024-12-14 01:06:47.609683] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:14.180 separate metadata which is not supported yet. 
00:07:14.180 passed 00:07:14.180 Test: blockdev nvme passthru vendor specific ...[2024-12-14 01:06:47.611234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:14.180 [2024-12-14 01:06:47.611274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:14.180 passed 00:07:14.180 Test: blockdev nvme admin passthru ...passed 00:07:14.180 Test: blockdev copy ...passed 00:07:14.180 00:07:14.180 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.180 suites 7 7 n/a 0 0 00:07:14.180 tests 161 161 161 0 0 00:07:14.180 asserts 1025 1025 1025 0 n/a 00:07:14.180 00:07:14.180 Elapsed time = 0.579 seconds 00:07:14.180 0 00:07:14.180 01:06:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74704 00:07:14.180 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74704 ']' 00:07:14.180 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74704 00:07:14.180 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:14.180 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74704 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.181 killing process with pid 74704 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74704' 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74704 00:07:14.181 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 
74704 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:14.439 00:07:14.439 real 0m1.389s 00:07:14.439 user 0m3.501s 00:07:14.439 sys 0m0.248s 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:14.439 ************************************ 00:07:14.439 END TEST bdev_bounds 00:07:14.439 ************************************ 00:07:14.439 01:06:47 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.439 01:06:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.439 01:06:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.439 01:06:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.439 ************************************ 00:07:14.439 START TEST bdev_nbd 00:07:14.439 ************************************ 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 
00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74758 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74758 /var/tmp/spdk-nbd.sock 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74758 ']' 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.439 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.439 01:06:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:14.439 [2024-12-14 01:06:47.923956] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:14.439 [2024-12-14 01:06:47.924069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.696 [2024-12-14 01:06:48.060970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.696 [2024-12-14 01:06:48.079763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.261 01:06:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.261 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:15.520 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:15.520 01:06:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.520 1+0 records in 00:07:15.520 1+0 records out 00:07:15.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100289 s, 4.1 MB/s 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.520 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.778 01:06:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.778 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.779 1+0 records in 00:07:15.779 1+0 records out 00:07:15.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906668 s, 4.5 MB/s 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.779 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.037 1+0 records in 00:07:16.037 1+0 records out 00:07:16.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141932 s, 2.9 MB/s 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.037 01:06:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.037 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.295 1+0 records in 00:07:16.295 1+0 records out 00:07:16.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866915 s, 4.7 MB/s 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 
00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.295 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.554 
1+0 records in 00:07:16.554 1+0 records out 00:07:16.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129391 s, 3.2 MB/s 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.554 01:06:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.813 01:06:50 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.813 1+0 records in 00:07:16.813 1+0 records out 00:07:16.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551 s, 7.4 MB/s 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.813 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.071 1+0 records in 00:07:17.071 1+0 records out 00:07:17.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393143 s, 10.4 MB/s 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd0", 00:07:17.071 "bdev_name": "Nvme0n1" 
00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd1", 00:07:17.071 "bdev_name": "Nvme1n1p1" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd2", 00:07:17.071 "bdev_name": "Nvme1n1p2" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd3", 00:07:17.071 "bdev_name": "Nvme2n1" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd4", 00:07:17.071 "bdev_name": "Nvme2n2" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd5", 00:07:17.071 "bdev_name": "Nvme2n3" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd6", 00:07:17.071 "bdev_name": "Nvme3n1" 00:07:17.071 } 00:07:17.071 ]' 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:17.071 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd0", 00:07:17.071 "bdev_name": "Nvme0n1" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd1", 00:07:17.071 "bdev_name": "Nvme1n1p1" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd2", 00:07:17.071 "bdev_name": "Nvme1n1p2" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd3", 00:07:17.071 "bdev_name": "Nvme2n1" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd4", 00:07:17.071 "bdev_name": "Nvme2n2" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd5", 00:07:17.071 "bdev_name": "Nvme2n3" 00:07:17.071 }, 00:07:17.071 { 00:07:17.071 "nbd_device": "/dev/nbd6", 00:07:17.071 "bdev_name": "Nvme3n1" 00:07:17.071 } 00:07:17.071 ]' 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:17.329 
01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.329 01:06:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.630 
01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.630 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:17.910 01:06:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.910 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.168 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.169 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:18.427 01:06:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.427 01:06:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.685 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:18.686 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.686 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.686 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.686 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.686 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.944 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.945 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:19.203 /dev/nbd0 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.203 1+0 records in 00:07:19.203 1+0 records out 00:07:19.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574194 s, 7.1 MB/s 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.203 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:19.462 /dev/nbd1 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.462 1+0 records in 00:07:19.462 1+0 records out 00:07:19.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027291 s, 15.0 MB/s 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 
00:07:19.462 01:06:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:19.462 /dev/nbd10 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.721 1+0 records in 00:07:19.721 1+0 records out 00:07:19.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405031 s, 10.1 MB/s 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 
-- # '[' 4096 '!=' 0 ']' 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:19.721 /dev/nbd11 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:19.721 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.722 1+0 records in 00:07:19.722 1+0 records out 00:07:19.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287041 s, 14.3 MB/s 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.722 
01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.722 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:19.980 /dev/nbd12 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:19.980 1+0 records in 00:07:19.980 1+0 records out 00:07:19.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382248 s, 10.7 MB/s 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.980 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:20.238 /dev/nbd13 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.238 1+0 records in 00:07:20.238 1+0 records out 00:07:20.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254451 s, 16.1 MB/s 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.238 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:20.497 /dev/nbd14 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.497 01:06:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.497 1+0 records in 00:07:20.497 1+0 records out 00:07:20.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495719 s, 8.3 MB/s 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.497 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd0", 00:07:20.755 "bdev_name": "Nvme0n1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd1", 00:07:20.755 "bdev_name": "Nvme1n1p1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd10", 00:07:20.755 "bdev_name": "Nvme1n1p2" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd11", 00:07:20.755 "bdev_name": "Nvme2n1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd12", 00:07:20.755 "bdev_name": "Nvme2n2" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd13", 00:07:20.755 "bdev_name": "Nvme2n3" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd14", 00:07:20.755 "bdev_name": "Nvme3n1" 00:07:20.755 } 00:07:20.755 ]' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd0", 00:07:20.755 "bdev_name": "Nvme0n1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd1", 00:07:20.755 "bdev_name": "Nvme1n1p1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd10", 00:07:20.755 "bdev_name": "Nvme1n1p2" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd11", 00:07:20.755 "bdev_name": "Nvme2n1" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd12", 00:07:20.755 "bdev_name": "Nvme2n2" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd13", 00:07:20.755 "bdev_name": "Nvme2n3" 00:07:20.755 }, 00:07:20.755 { 00:07:20.755 "nbd_device": "/dev/nbd14", 00:07:20.755 "bdev_name": "Nvme3n1" 00:07:20.755 } 00:07:20.755 ]' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.755 /dev/nbd1 00:07:20.755 /dev/nbd10 00:07:20.755 /dev/nbd11 00:07:20.755 /dev/nbd12 00:07:20.755 
/dev/nbd13 00:07:20.755 /dev/nbd14' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.755 /dev/nbd1 00:07:20.755 /dev/nbd10 00:07:20.755 /dev/nbd11 00:07:20.755 /dev/nbd12 00:07:20.755 /dev/nbd13 00:07:20.755 /dev/nbd14' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:20.755 256+0 records in 00:07:20.755 256+0 records out 00:07:20.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433102 s, 242 MB/s 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.755 256+0 records in 00:07:20.755 256+0 records out 00:07:20.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0551968 s, 19.0 MB/s 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.755 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.013 256+0 records in 00:07:21.013 256+0 records out 00:07:21.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0593705 s, 17.7 MB/s 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:21.013 256+0 records in 00:07:21.013 256+0 records out 00:07:21.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0571376 s, 18.4 MB/s 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:21.013 256+0 records in 00:07:21.013 256+0 records out 00:07:21.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0590608 s, 17.8 MB/s 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:21.013 256+0 records in 00:07:21.013 256+0 records out 00:07:21.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0581276 s, 18.0 MB/s 00:07:21.013 01:06:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.013 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:21.272 256+0 records in 00:07:21.272 256+0 records out 00:07:21.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600561 s, 17.5 MB/s 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:21.272 256+0 records in 00:07:21.272 256+0 records out 00:07:21.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113977 s, 9.2 MB/s 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 
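The `waitfornbd` helper traced repeatedly above (`autotest_common.sh@872`–`@893`) polls `/proc/partitions` until the kernel registers the new NBD device, then confirms readability with a single `dd`. A minimal sketch of that polling loop; the partitions file is a parameter here (an assumption, purely so the sketch can run without a real `/dev/nbdX`):

```shell
# Sketch of the waitfornbd polling pattern from the trace above.
# The real helper always reads /proc/partitions; the second argument
# is a stand-in so this can be exercised without an NBD device.
waitfornbd() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for (( i = 1; i <= 20; i++ )); do
        # -w: match the device name as a whole word, as in the trace
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The real helper follows the successful `grep` with a one-block `dd ... iflag=direct` read to confirm the device actually serves I/O, not just that it appears in the partition table.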
00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
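The write and verify phases above (`nbd_common.sh@76`–`@85`) reduce to: fill a temp file with 1 MiB of random data, `dd` it onto every NBD device, then `cmp` each device back against the file. A condensed sketch, with regular files standing in for `/dev/nbd0`–`/dev/nbd14` (an assumption so it runs unprivileged; `oflag=direct` is dropped for the same reason):

```shell
# Write/verify sketch: random source file -> copy to each target -> byte-compare.
tmp_file=$(mktemp)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

# Hypothetical stand-ins for the real test's /dev/nbdX list.
nbd_list=("$(mktemp)" "$(mktemp)" "$(mktemp)")

for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 status=none   # write phase
done
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"                             # verify phase; non-zero exit on mismatch
done
rm -f "${nbd_list[@]}"
```

Because every device was written from the same source file, a single `cmp` per device is enough to prove the data survived the round trip through the NBD layer; the trace then deletes `nbdrandtest`, exactly as the sketch's `rm` would for its temp targets.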
00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.272 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.530 01:06:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.530 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.531 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.531 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.531 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.788 
01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.788 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:22.045 
01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.045 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.303 01:06:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:22.561 
01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.561 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.819 01:06:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:23.076 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:23.076 malloc_lvol_verify 00:07:23.334 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 
00:07:23.334 b7345447-d220-4609-a054-5d9a9d2f1e25 00:07:23.334 01:06:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:23.591 c7f1b32a-6f3b-445e-ba3c-f2336448434c 00:07:23.591 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:23.849 /dev/nbd0 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:23.849 mke2fs 1.47.0 (5-Feb-2023) 00:07:23.849 Discarding device blocks: 0/4096 done 00:07:23.849 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:23.849 00:07:23.849 Allocating group tables: 0/1 done 00:07:23.849 Writing inode tables: 0/1 done 00:07:23.849 Creating journal (1024 blocks): done 00:07:23.849 Writing superblocks and filesystem accounting information: 0/1 done 00:07:23.849 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.849 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74758 ']' 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.116 killing process with pid 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
74758' 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74758 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:24.116 00:07:24.116 real 0m9.847s 00:07:24.116 user 0m14.549s 00:07:24.116 sys 0m3.296s 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.116 ************************************ 00:07:24.116 END TEST bdev_nbd 00:07:24.116 ************************************ 00:07:24.116 01:06:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 skipping fio tests on NVMe due to multi-ns failures. 00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
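Both `nbd_get_count` calls in the trace (the seven-device list mid-test and the empty `[]` after teardown) follow the same recipe: fetch the `nbd_get_disks` JSON over RPC, extract the `nbd_device` fields with `jq`, and count the `/dev/nbd` names with `grep -c`. A condensed sketch with the JSON inlined, assuming `jq` is installed; the two-entry list is illustrative, not the test's actual state:

```shell
# nbd_get_count sketch: JSON from nbd_get_disks -> device names -> count.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Nvme1n1p1" }
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# '|| true' mirrors the trace's fallback: an empty list yields count 0
# (grep -c exits non-zero on no match) rather than aborting the script.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
```

The test then asserts the count against the expected value (`'[' 7 -ne 7 ']'` during the run, `'[' 0 -ne 0 ']'` after all `nbd_stop_disk` calls), so a leaked or missing NBD device fails the stage immediately.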
00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:24.390 01:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.390 01:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:24.390 01:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.390 01:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 ************************************ 00:07:24.390 START TEST bdev_verify 00:07:24.390 ************************************ 00:07:24.390 01:06:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.390 [2024-12-14 01:06:57.822704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:24.390 [2024-12-14 01:06:57.822812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75156 ] 00:07:24.390 [2024-12-14 01:06:57.959996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.390 [2024-12-14 01:06:57.980410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.390 [2024-12-14 01:06:57.980507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.956 Running I/O for 5 seconds... 
00:07:27.265 19648.00 IOPS, 76.75 MiB/s [2024-12-14T01:07:01.812Z] 18560.00 IOPS, 72.50 MiB/s [2024-12-14T01:07:02.745Z] 19840.00 IOPS, 77.50 MiB/s [2024-12-14T01:07:03.676Z] 20752.00 IOPS, 81.06 MiB/s [2024-12-14T01:07:03.676Z] 20390.40 IOPS, 79.65 MiB/s 00:07:30.065 Latency(us) 00:07:30.065 [2024-12-14T01:07:03.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.065 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0xbd0bd 00:07:30.065 Nvme0n1 : 5.06 1442.00 5.63 0.00 0.00 88541.83 14115.45 89532.26 00:07:30.065 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:30.065 Nvme0n1 : 5.06 1440.98 5.63 0.00 0.00 88597.29 16837.71 88322.36 00:07:30.065 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x4ff80 00:07:30.065 Nvme1n1p1 : 5.06 1441.10 5.63 0.00 0.00 88447.97 16131.94 86305.87 00:07:30.065 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:30.065 Nvme1n1p1 : 5.07 1439.80 5.62 0.00 0.00 88454.26 17442.66 78643.20 00:07:30.065 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x4ff7f 00:07:30.065 Nvme1n1p2 : 5.07 1440.37 5.63 0.00 0.00 88324.06 17543.48 85095.98 00:07:30.065 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:30.065 Nvme1n1p2 : 5.07 1439.11 5.62 0.00 0.00 88304.17 18955.03 73803.62 00:07:30.065 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x80000 00:07:30.065 Nvme2n1 : 5.07 1440.00 5.62 0.00 0.00 88210.70 19459.15 
79449.80 00:07:30.065 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x80000 length 0x80000 00:07:30.065 Nvme2n1 : 5.07 1438.46 5.62 0.00 0.00 88142.79 20568.22 72593.72 00:07:30.065 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x80000 00:07:30.065 Nvme2n2 : 5.07 1439.52 5.62 0.00 0.00 88072.44 19257.50 84289.38 00:07:30.065 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x80000 length 0x80000 00:07:30.065 Nvme2n2 : 5.07 1438.09 5.62 0.00 0.00 88002.61 20265.75 74206.92 00:07:30.065 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x80000 00:07:30.065 Nvme2n3 : 5.07 1438.96 5.62 0.00 0.00 87905.07 17946.78 86709.17 00:07:30.065 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x80000 length 0x80000 00:07:30.065 Nvme2n3 : 5.08 1447.95 5.66 0.00 0.00 87322.25 4310.25 77836.60 00:07:30.065 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x0 length 0x20000 00:07:30.065 Nvme3n1 : 5.08 1449.44 5.66 0.00 0.00 87198.85 1714.02 89532.26 00:07:30.065 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.065 Verification LBA range: start 0x20000 length 0x20000 00:07:30.065 Nvme3n1 : 5.08 1447.57 5.65 0.00 0.00 87189.19 4713.55 79449.80 00:07:30.065 [2024-12-14T01:07:03.677Z] =================================================================================================================== 00:07:30.065 [2024-12-14T01:07:03.677Z] Total : 20183.35 78.84 0.00 0.00 88049.43 1714.02 89532.26 00:07:30.630 00:07:30.630 real 0m6.350s 00:07:30.630 user 0m12.027s 00:07:30.630 sys 0m0.206s 00:07:30.630 01:07:04 
blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.630 01:07:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:30.630 ************************************ 00:07:30.630 END TEST bdev_verify 00:07:30.630 ************************************ 00:07:30.630 01:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.630 01:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.630 01:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.630 01:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.630 ************************************ 00:07:30.630 START TEST bdev_verify_big_io 00:07:30.630 ************************************ 00:07:30.630 01:07:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.630 [2024-12-14 01:07:04.232543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:30.630 [2024-12-14 01:07:04.232667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75244 ] 00:07:30.887 [2024-12-14 01:07:04.378851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.887 [2024-12-14 01:07:04.399603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.887 [2024-12-14 01:07:04.399682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.453 Running I/O for 5 seconds... 00:07:37.076 1511.00 IOPS, 94.44 MiB/s [2024-12-14T01:07:10.946Z] 3096.00 IOPS, 193.50 MiB/s [2024-12-14T01:07:10.946Z] 3136.00 IOPS, 196.00 MiB/s 00:07:37.334 Latency(us) 00:07:37.334 [2024-12-14T01:07:10.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0xbd0b 00:07:37.334 Nvme0n1 : 5.77 110.83 6.93 0.00 0.00 1099277.55 20064.10 1206669.00 00:07:37.334 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:37.334 Nvme0n1 : 5.77 98.59 6.16 0.00 0.00 1242941.49 13409.67 1871304.86 00:07:37.334 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0x4ff8 00:07:37.334 Nvme1n1p1 : 5.78 111.46 6.97 0.00 0.00 1063189.31 61301.37 1155046.79 00:07:37.334 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:37.334 Nvme1n1p1 : 5.77 101.85 6.37 0.00 0.00 1177670.26 35086.97 1897115.96 00:07:37.334 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 
Verification LBA range: start 0x0 length 0x4ff7 00:07:37.334 Nvme1n1p2 : 5.85 106.82 6.68 0.00 0.00 1079843.57 105664.20 1845493.76 00:07:37.334 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:37.334 Nvme1n1p2 : 5.77 107.53 6.72 0.00 0.00 1076313.97 51622.20 1322818.95 00:07:37.334 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0x8000 00:07:37.334 Nvme2n1 : 5.85 117.57 7.35 0.00 0.00 960382.01 71787.13 1013085.74 00:07:37.334 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x8000 length 0x8000 00:07:37.334 Nvme2n1 : 5.86 105.75 6.61 0.00 0.00 1064094.21 85095.98 1987454.82 00:07:37.334 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0x8000 00:07:37.334 Nvme2n2 : 5.93 129.52 8.09 0.00 0.00 856099.05 35288.62 1038896.84 00:07:37.334 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x8000 length 0x8000 00:07:37.334 Nvme2n2 : 6.01 114.55 7.16 0.00 0.00 951934.22 62511.26 2013265.92 00:07:37.334 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0x8000 00:07:37.334 Nvme2n3 : 5.97 132.92 8.31 0.00 0.00 805895.47 41539.74 1064707.94 00:07:37.334 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x8000 length 0x8000 00:07:37.334 Nvme2n3 : 6.06 124.00 7.75 0.00 0.00 857030.95 46580.97 2051982.57 00:07:37.334 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x0 length 0x2000 00:07:37.334 Nvme3n1 : 6.07 164.82 10.30 0.00 0.00 634085.02 768.79 1096971.82 00:07:37.334 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.334 Verification LBA range: start 0x2000 length 0x2000 00:07:37.334 Nvme3n1 : 6.09 144.72 9.04 0.00 0.00 715292.75 831.80 2090699.22 00:07:37.334 [2024-12-14T01:07:10.946Z] =================================================================================================================== 00:07:37.334 [2024-12-14T01:07:10.946Z] Total : 1670.92 104.43 0.00 0.00 942754.86 768.79 2090699.22 00:07:38.268 00:07:38.268 real 0m7.562s 00:07:38.268 user 0m14.449s 00:07:38.268 sys 0m0.211s 00:07:38.268 01:07:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.268 01:07:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.268 ************************************ 00:07:38.268 END TEST bdev_verify_big_io 00:07:38.268 ************************************ 00:07:38.268 01:07:11 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.268 01:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.268 01:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.268 01:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.268 ************************************ 00:07:38.268 START TEST bdev_write_zeroes 00:07:38.268 ************************************ 00:07:38.268 01:07:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.268 [2024-12-14 01:07:11.842228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:38.268 [2024-12-14 01:07:11.842345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75343 ] 00:07:38.526 [2024-12-14 01:07:11.985221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.526 [2024-12-14 01:07:12.001882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.785 Running I/O for 1 seconds... 00:07:40.162 71680.00 IOPS, 280.00 MiB/s 00:07:40.162 Latency(us) 00:07:40.162 [2024-12-14T01:07:13.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.162 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme0n1 : 1.03 10175.61 39.75 0.00 0.00 12552.23 8519.68 24399.56 00:07:40.162 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme1n1p1 : 1.03 10163.23 39.70 0.00 0.00 12552.90 8469.27 24399.56 00:07:40.162 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme1n1p2 : 1.03 10150.85 39.65 0.00 0.00 12541.67 8368.44 23592.96 00:07:40.162 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme2n1 : 1.03 10139.40 39.61 0.00 0.00 12518.07 8670.92 22887.19 00:07:40.162 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme2n2 : 1.03 10127.95 39.56 0.00 0.00 12510.00 8519.68 22483.89 00:07:40.162 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme2n3 : 1.03 10116.63 39.52 0.00 0.00 12495.99 7461.02 23088.84 00:07:40.162 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.162 Nvme3n1 : 1.03 10105.33 39.47 0.00 0.00 12483.42 6906.49 24702.03 00:07:40.162 [2024-12-14T01:07:13.774Z] 
=================================================================================================================== 00:07:40.162 [2024-12-14T01:07:13.774Z] Total : 70979.00 277.26 0.00 0.00 12522.04 6906.49 24702.03 00:07:40.162 00:07:40.162 real 0m1.808s 00:07:40.162 user 0m1.545s 00:07:40.162 sys 0m0.153s 00:07:40.162 01:07:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.162 ************************************ 00:07:40.162 END TEST bdev_write_zeroes 00:07:40.163 ************************************ 00:07:40.163 01:07:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:40.163 01:07:13 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.163 01:07:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:40.163 01:07:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.163 01:07:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.163 ************************************ 00:07:40.163 START TEST bdev_json_nonenclosed 00:07:40.163 ************************************ 00:07:40.163 01:07:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.163 [2024-12-14 01:07:13.717900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:40.163 [2024-12-14 01:07:13.718036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75384 ] 00:07:40.423 [2024-12-14 01:07:13.867930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.423 [2024-12-14 01:07:13.899142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.423 [2024-12-14 01:07:13.899248] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:40.423 [2024-12-14 01:07:13.899266] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.423 [2024-12-14 01:07:13.899279] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.423 00:07:40.423 real 0m0.330s 00:07:40.423 user 0m0.129s 00:07:40.423 sys 0m0.095s 00:07:40.423 01:07:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.423 01:07:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:40.423 ************************************ 00:07:40.423 END TEST bdev_json_nonenclosed 00:07:40.423 ************************************ 00:07:40.683 01:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.683 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:40.683 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.683 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.683 ************************************ 00:07:40.683 START TEST bdev_json_nonarray 00:07:40.683 
************************************ 00:07:40.683 01:07:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.683 [2024-12-14 01:07:14.117545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:40.683 [2024-12-14 01:07:14.117708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75410 ] 00:07:40.683 [2024-12-14 01:07:14.266708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.683 [2024-12-14 01:07:14.291194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.683 [2024-12-14 01:07:14.291298] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:40.683 [2024-12-14 01:07:14.291314] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.683 [2024-12-14 01:07:14.291325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.941 00:07:40.941 real 0m0.306s 00:07:40.941 user 0m0.103s 00:07:40.941 sys 0m0.099s 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.941 ************************************ 00:07:40.941 END TEST bdev_json_nonarray 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:40.941 ************************************ 00:07:40.941 01:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:40.941 01:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:40.941 01:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:40.941 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.941 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.941 01:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.941 ************************************ 00:07:40.941 START TEST bdev_gpt_uuid 00:07:40.941 ************************************ 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75430 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- 
# waitforlisten 75430 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75430 ']' 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.941 01:07:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:40.941 [2024-12-14 01:07:14.491014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:40.942 [2024-12-14 01:07:14.491146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75430 ] 00:07:41.199 [2024-12-14 01:07:14.635492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.199 [2024-12-14 01:07:14.654608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.765 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.765 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:41.765 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.765 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.765 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.332 Some configs were skipped because the RPC state that can call them passed over. 
00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:42.332 { 00:07:42.332 "name": "Nvme1n1p1", 00:07:42.332 "aliases": [ 00:07:42.332 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:42.332 ], 00:07:42.332 "product_name": "GPT Disk", 00:07:42.332 "block_size": 4096, 00:07:42.332 "num_blocks": 655104, 00:07:42.332 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.332 "assigned_rate_limits": { 00:07:42.332 "rw_ios_per_sec": 0, 00:07:42.332 "rw_mbytes_per_sec": 0, 00:07:42.332 "r_mbytes_per_sec": 0, 00:07:42.332 "w_mbytes_per_sec": 0 00:07:42.332 }, 00:07:42.332 "claimed": false, 00:07:42.332 "zoned": false, 00:07:42.332 "supported_io_types": { 00:07:42.332 "read": true, 00:07:42.332 "write": true, 00:07:42.332 "unmap": true, 00:07:42.332 "flush": true, 00:07:42.332 "reset": true, 00:07:42.332 "nvme_admin": false, 00:07:42.332 "nvme_io": false, 00:07:42.332 "nvme_io_md": false, 00:07:42.332 "write_zeroes": true, 00:07:42.332 "zcopy": false, 00:07:42.332 
"get_zone_info": false, 00:07:42.332 "zone_management": false, 00:07:42.332 "zone_append": false, 00:07:42.332 "compare": true, 00:07:42.332 "compare_and_write": false, 00:07:42.332 "abort": true, 00:07:42.332 "seek_hole": false, 00:07:42.332 "seek_data": false, 00:07:42.332 "copy": true, 00:07:42.332 "nvme_iov_md": false 00:07:42.332 }, 00:07:42.332 "driver_specific": { 00:07:42.332 "gpt": { 00:07:42.332 "base_bdev": "Nvme1n1", 00:07:42.332 "offset_blocks": 256, 00:07:42.332 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:42.332 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.332 "partition_name": "SPDK_TEST_first" 00:07:42.332 } 00:07:42.332 } 00:07:42.332 } 00:07:42.332 ]' 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.332 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:42.333 { 00:07:42.333 "name": "Nvme1n1p2", 00:07:42.333 "aliases": [ 00:07:42.333 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:42.333 ], 00:07:42.333 "product_name": "GPT Disk", 00:07:42.333 "block_size": 4096, 00:07:42.333 "num_blocks": 655103, 00:07:42.333 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.333 "assigned_rate_limits": { 00:07:42.333 "rw_ios_per_sec": 0, 00:07:42.333 "rw_mbytes_per_sec": 0, 00:07:42.333 "r_mbytes_per_sec": 0, 00:07:42.333 "w_mbytes_per_sec": 0 00:07:42.333 }, 00:07:42.333 "claimed": false, 00:07:42.333 "zoned": false, 00:07:42.333 "supported_io_types": { 00:07:42.333 "read": true, 00:07:42.333 "write": true, 00:07:42.333 "unmap": true, 00:07:42.333 "flush": true, 00:07:42.333 "reset": true, 00:07:42.333 "nvme_admin": false, 00:07:42.333 "nvme_io": false, 00:07:42.333 "nvme_io_md": false, 00:07:42.333 "write_zeroes": true, 00:07:42.333 "zcopy": false, 00:07:42.333 "get_zone_info": false, 00:07:42.333 "zone_management": false, 00:07:42.333 "zone_append": false, 00:07:42.333 "compare": true, 00:07:42.333 "compare_and_write": false, 00:07:42.333 "abort": true, 00:07:42.333 "seek_hole": false, 00:07:42.333 "seek_data": false, 00:07:42.333 "copy": true, 00:07:42.333 "nvme_iov_md": false 00:07:42.333 }, 00:07:42.333 "driver_specific": { 00:07:42.333 "gpt": { 00:07:42.333 "base_bdev": "Nvme1n1", 00:07:42.333 "offset_blocks": 655360, 00:07:42.333 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:42.333 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.333 "partition_name": "SPDK_TEST_second" 00:07:42.333 } 00:07:42.333 } 00:07:42.333 } 00:07:42.333 ]' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 
00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75430 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75430 ']' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75430 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75430 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.333 killing process with pid 75430 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75430' 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75430 00:07:42.333 01:07:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75430 00:07:42.599 00:07:42.599 real 0m1.769s 00:07:42.599 user 0m1.964s 00:07:42.599 sys 
0m0.326s 00:07:42.599 01:07:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.599 ************************************ 00:07:42.599 END TEST bdev_gpt_uuid 00:07:42.599 ************************************ 00:07:42.599 01:07:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:42.857 01:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.115 Waiting for block devices as requested 00:07:43.115 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.373 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.373 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.373 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.638 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:48.638 01:07:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:48.638 01:07:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:48.899 /dev/nvme0n1: 8 bytes were erased 
at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.899 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.899 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:48.899 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:48.899 01:07:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:48.899 ************************************ 00:07:48.899 END TEST blockdev_nvme_gpt 00:07:48.899 ************************************ 00:07:48.899 00:07:48.899 real 0m49.472s 00:07:48.899 user 1m2.471s 00:07:48.899 sys 0m7.133s 00:07:48.899 01:07:22 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.899 01:07:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.899 01:07:22 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.899 01:07:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.899 01:07:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.899 01:07:22 -- common/autotest_common.sh@10 -- # set +x 00:07:48.899 ************************************ 00:07:48.899 START TEST nvme 00:07:48.899 ************************************ 00:07:48.899 01:07:22 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.899 * Looking for test storage... 
00:07:48.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:48.899 01:07:22 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:48.899 01:07:22 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:48.899 01:07:22 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.161 01:07:22 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.161 01:07:22 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.161 01:07:22 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.161 01:07:22 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.161 01:07:22 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.161 01:07:22 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:49.161 01:07:22 nvme -- scripts/common.sh@345 -- # : 1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.161 01:07:22 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.161 01:07:22 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@353 -- # local d=1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.161 01:07:22 nvme -- scripts/common.sh@355 -- # echo 1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.161 01:07:22 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@353 -- # local d=2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.161 01:07:22 nvme -- scripts/common.sh@355 -- # echo 2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.161 01:07:22 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.161 01:07:22 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.161 01:07:22 nvme -- scripts/common.sh@368 -- # return 0 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.161 --rc genhtml_branch_coverage=1 00:07:49.161 --rc genhtml_function_coverage=1 00:07:49.161 --rc genhtml_legend=1 00:07:49.161 --rc geninfo_all_blocks=1 00:07:49.161 --rc geninfo_unexecuted_blocks=1 00:07:49.161 00:07:49.161 ' 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.161 --rc genhtml_branch_coverage=1 00:07:49.161 --rc genhtml_function_coverage=1 00:07:49.161 --rc genhtml_legend=1 00:07:49.161 --rc geninfo_all_blocks=1 00:07:49.161 --rc geninfo_unexecuted_blocks=1 00:07:49.161 00:07:49.161 ' 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.161 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:49.161 --rc genhtml_branch_coverage=1 00:07:49.161 --rc genhtml_function_coverage=1 00:07:49.161 --rc genhtml_legend=1 00:07:49.161 --rc geninfo_all_blocks=1 00:07:49.161 --rc geninfo_unexecuted_blocks=1 00:07:49.161 00:07:49.161 ' 00:07:49.161 01:07:22 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.161 --rc genhtml_branch_coverage=1 00:07:49.161 --rc genhtml_function_coverage=1 00:07:49.161 --rc genhtml_legend=1 00:07:49.161 --rc geninfo_all_blocks=1 00:07:49.161 --rc geninfo_unexecuted_blocks=1 00:07:49.161 00:07:49.161 ' 00:07:49.161 01:07:22 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:49.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.367 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.367 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.367 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.367 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.367 01:07:23 nvme -- nvme/nvme.sh@79 -- # uname 00:07:50.367 Waiting for stub to ready for secondary processes... 00:07:50.367 01:07:23 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:50.367 01:07:23 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:50.367 01:07:23 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1075 -- # stubpid=76056 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 
00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76056 ]] 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:50.367 01:07:23 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:50.367 [2024-12-14 01:07:23.783697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:50.367 [2024-12-14 01:07:23.783838] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:51.305 [2024-12-14 01:07:24.745552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.305 01:07:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:51.305 01:07:24 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76056 ]] 00:07:51.305 01:07:24 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:51.305 [2024-12-14 01:07:24.758288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.305 [2024-12-14 01:07:24.758555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.305 [2024-12-14 01:07:24.758602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.305 [2024-12-14 01:07:24.769837] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:51.305 [2024-12-14 01:07:24.769887] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.305 [2024-12-14 01:07:24.782740] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:51.305 [2024-12-14 01:07:24.782902] nvme_cuse.c: 928:cuse_session_create: 
*NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:51.305 [2024-12-14 01:07:24.783818] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.305 [2024-12-14 01:07:24.784104] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:51.305 [2024-12-14 01:07:24.784213] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:51.305 [2024-12-14 01:07:24.785542] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.305 [2024-12-14 01:07:24.785764] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:51.305 [2024-12-14 01:07:24.785823] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:51.305 [2024-12-14 01:07:24.789168] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.305 [2024-12-14 01:07:24.789340] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:51.305 [2024-12-14 01:07:24.789400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:51.305 [2024-12-14 01:07:24.789439] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:51.305 [2024-12-14 01:07:24.789475] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:52.238 done. 00:07:52.238 01:07:25 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:52.238 01:07:25 nvme -- common/autotest_common.sh@1082 -- # echo done. 
00:07:52.238 01:07:25 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:52.238 01:07:25 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:52.238 01:07:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.238 01:07:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.238 ************************************ 00:07:52.238 START TEST nvme_reset 00:07:52.238 ************************************ 00:07:52.238 01:07:25 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:52.496 Initializing NVMe Controllers 00:07:52.496 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:52.496 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:52.496 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:52.496 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:52.496 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:52.496 00:07:52.496 real 0m0.175s 00:07:52.496 user 0m0.068s 00:07:52.496 sys 0m0.071s 00:07:52.496 ************************************ 00:07:52.496 END TEST nvme_reset 00:07:52.496 ************************************ 00:07:52.496 01:07:25 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.496 01:07:25 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:52.497 01:07:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:52.497 01:07:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.497 01:07:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.497 01:07:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.497 ************************************ 00:07:52.497 START TEST nvme_identify 00:07:52.497 ************************************ 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:52.497 
01:07:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:52.497 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:52.497 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:52.497 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:52.497 01:07:26 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:52.497 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:52.758 [2024-12-14 01:07:26.222521] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76089 terminated unexpected 00:07:52.758 ===================================================== 00:07:52.758 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.758 ===================================================== 00:07:52.758 Controller Capabilities/Features 00:07:52.758 ================================ 00:07:52.758 Vendor ID: 1b36 00:07:52.758 Subsystem Vendor ID: 1af4 00:07:52.758 Serial Number: 12343 00:07:52.758 Model Number: QEMU NVMe Ctrl 00:07:52.758 Firmware Version: 8.0.0 00:07:52.758 Recommended Arb Burst: 6 00:07:52.758 IEEE OUI Identifier: 00 54 52 00:07:52.758 Multi-path I/O 00:07:52.758 May 
have multiple subsystem ports: No 00:07:52.758 May have multiple controllers: Yes 00:07:52.758 Associated with SR-IOV VF: No 00:07:52.758 Max Data Transfer Size: 524288 00:07:52.758 Max Number of Namespaces: 256 00:07:52.758 Max Number of I/O Queues: 64 00:07:52.758 NVMe Specification Version (VS): 1.4 00:07:52.758 NVMe Specification Version (Identify): 1.4 00:07:52.758 Maximum Queue Entries: 2048 00:07:52.758 Contiguous Queues Required: Yes 00:07:52.758 Arbitration Mechanisms Supported 00:07:52.758 Weighted Round Robin: Not Supported 00:07:52.758 Vendor Specific: Not Supported 00:07:52.758 Reset Timeout: 7500 ms 00:07:52.758 Doorbell Stride: 4 bytes 00:07:52.758 NVM Subsystem Reset: Not Supported 00:07:52.758 Command Sets Supported 00:07:52.758 NVM Command Set: Supported 00:07:52.758 Boot Partition: Not Supported 00:07:52.758 Memory Page Size Minimum: 4096 bytes 00:07:52.758 Memory Page Size Maximum: 65536 bytes 00:07:52.758 Persistent Memory Region: Not Supported 00:07:52.758 Optional Asynchronous Events Supported 00:07:52.758 Namespace Attribute Notices: Supported 00:07:52.758 Firmware Activation Notices: Not Supported 00:07:52.758 ANA Change Notices: Not Supported 00:07:52.758 PLE Aggregate Log Change Notices: Not Supported 00:07:52.758 LBA Status Info Alert Notices: Not Supported 00:07:52.758 EGE Aggregate Log Change Notices: Not Supported 00:07:52.758 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.758 Zone Descriptor Change Notices: Not Supported 00:07:52.758 Discovery Log Change Notices: Not Supported 00:07:52.758 Controller Attributes 00:07:52.758 128-bit Host Identifier: Not Supported 00:07:52.758 Non-Operational Permissive Mode: Not Supported 00:07:52.758 NVM Sets: Not Supported 00:07:52.758 Read Recovery Levels: Not Supported 00:07:52.758 Endurance Groups: Supported 00:07:52.758 Predictable Latency Mode: Not Supported 00:07:52.758 Traffic Based Keep ALive: Not Supported 00:07:52.758 Namespace Granularity: Not Supported 00:07:52.758 SQ 
Associations: Not Supported 00:07:52.758 UUID List: Not Supported 00:07:52.758 Multi-Domain Subsystem: Not Supported 00:07:52.758 Fixed Capacity Management: Not Supported 00:07:52.758 Variable Capacity Management: Not Supported 00:07:52.758 Delete Endurance Group: Not Supported 00:07:52.758 Delete NVM Set: Not Supported 00:07:52.758 Extended LBA Formats Supported: Supported 00:07:52.758 Flexible Data Placement Supported: Supported 00:07:52.758 00:07:52.758 Controller Memory Buffer Support 00:07:52.758 ================================ 00:07:52.758 Supported: No 00:07:52.758 00:07:52.758 Persistent Memory Region Support 00:07:52.758 ================================ 00:07:52.758 Supported: No 00:07:52.758 00:07:52.758 Admin Command Set Attributes 00:07:52.758 ============================ 00:07:52.758 Security Send/Receive: Not Supported 00:07:52.758 Format NVM: Supported 00:07:52.758 Firmware Activate/Download: Not Supported 00:07:52.758 Namespace Management: Supported 00:07:52.758 Device Self-Test: Not Supported 00:07:52.758 Directives: Supported 00:07:52.758 NVMe-MI: Not Supported 00:07:52.758 Virtualization Management: Not Supported 00:07:52.758 Doorbell Buffer Config: Supported 00:07:52.758 Get LBA Status Capability: Not Supported 00:07:52.758 Command & Feature Lockdown Capability: Not Supported 00:07:52.758 Abort Command Limit: 4 00:07:52.758 Async Event Request Limit: 4 00:07:52.758 Number of Firmware Slots: N/A 00:07:52.758 Firmware Slot 1 Read-Only: N/A 00:07:52.758 Firmware Activation Without Reset: N/A 00:07:52.758 Multiple Update Detection Support: N/A 00:07:52.758 Firmware Update Granularity: No Information Provided 00:07:52.758 Per-Namespace SMART Log: Yes 00:07:52.758 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.758 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:52.758 Command Effects Log Page: Supported 00:07:52.758 Get Log Page Extended Data: Supported 00:07:52.758 Telemetry Log Pages: Not Supported 00:07:52.758 Persistent Event 
Log Pages: Not Supported 00:07:52.758 Supported Log Pages Log Page: May Support 00:07:52.758 Commands Supported & Effects Log Page: Not Supported 00:07:52.758 Feature Identifiers & Effects Log Page:May Support 00:07:52.758 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.758 Data Area 4 for Telemetry Log: Not Supported 00:07:52.758 Error Log Page Entries Supported: 1 00:07:52.758 Keep Alive: Not Supported 00:07:52.758 00:07:52.758 NVM Command Set Attributes 00:07:52.758 ========================== 00:07:52.758 Submission Queue Entry Size 00:07:52.758 Max: 64 00:07:52.758 Min: 64 00:07:52.758 Completion Queue Entry Size 00:07:52.758 Max: 16 00:07:52.758 Min: 16 00:07:52.758 Number of Namespaces: 256 00:07:52.758 Compare Command: Supported 00:07:52.758 Write Uncorrectable Command: Not Supported 00:07:52.758 Dataset Management Command: Supported 00:07:52.758 Write Zeroes Command: Supported 00:07:52.758 Set Features Save Field: Supported 00:07:52.758 Reservations: Not Supported 00:07:52.758 Timestamp: Supported 00:07:52.758 Copy: Supported 00:07:52.758 Volatile Write Cache: Present 00:07:52.758 Atomic Write Unit (Normal): 1 00:07:52.758 Atomic Write Unit (PFail): 1 00:07:52.758 Atomic Compare & Write Unit: 1 00:07:52.758 Fused Compare & Write: Not Supported 00:07:52.758 Scatter-Gather List 00:07:52.758 SGL Command Set: Supported 00:07:52.758 SGL Keyed: Not Supported 00:07:52.758 SGL Bit Bucket Descriptor: Not Supported 00:07:52.758 SGL Metadata Pointer: Not Supported 00:07:52.758 Oversized SGL: Not Supported 00:07:52.758 SGL Metadata Address: Not Supported 00:07:52.758 SGL Offset: Not Supported 00:07:52.758 Transport SGL Data Block: Not Supported 00:07:52.758 Replay Protected Memory Block: Not Supported 00:07:52.758 00:07:52.758 Firmware Slot Information 00:07:52.758 ========================= 00:07:52.758 Active slot: 1 00:07:52.758 Slot 1 Firmware Revision: 1.0 00:07:52.758 00:07:52.758 00:07:52.758 Commands Supported and Effects 00:07:52.758 
============================== 00:07:52.758 Admin Commands 00:07:52.758 -------------- 00:07:52.758 Delete I/O Submission Queue (00h): Supported 00:07:52.758 Create I/O Submission Queue (01h): Supported 00:07:52.758 Get Log Page (02h): Supported 00:07:52.758 Delete I/O Completion Queue (04h): Supported 00:07:52.758 Create I/O Completion Queue (05h): Supported 00:07:52.758 Identify (06h): Supported 00:07:52.758 Abort (08h): Supported 00:07:52.759 Set Features (09h): Supported 00:07:52.759 Get Features (0Ah): Supported 00:07:52.759 Asynchronous Event Request (0Ch): Supported 00:07:52.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.759 Directive Send (19h): Supported 00:07:52.759 Directive Receive (1Ah): Supported 00:07:52.759 Virtualization Management (1Ch): Supported 00:07:52.759 Doorbell Buffer Config (7Ch): Supported 00:07:52.759 Format NVM (80h): Supported LBA-Change 00:07:52.759 I/O Commands 00:07:52.759 ------------ 00:07:52.759 Flush (00h): Supported LBA-Change 00:07:52.759 Write (01h): Supported LBA-Change 00:07:52.759 Read (02h): Supported 00:07:52.759 Compare (05h): Supported 00:07:52.759 Write Zeroes (08h): Supported LBA-Change 00:07:52.759 Dataset Management (09h): Supported LBA-Change 00:07:52.759 Unknown (0Ch): Supported 00:07:52.759 Unknown (12h): Supported 00:07:52.759 Copy (19h): Supported LBA-Change 00:07:52.759 Unknown (1Dh): Supported LBA-Change 00:07:52.759 00:07:52.759 Error Log 00:07:52.759 ========= 00:07:52.759 00:07:52.759 Arbitration 00:07:52.759 =========== 00:07:52.759 Arbitration Burst: no limit 00:07:52.759 00:07:52.759 Power Management 00:07:52.759 ================ 00:07:52.759 Number of Power States: 1 00:07:52.759 Current Power State: Power State #0 00:07:52.759 Power State #0: 00:07:52.759 Max Power: 25.00 W 00:07:52.759 Non-Operational State: Operational 00:07:52.759 Entry Latency: 16 microseconds 00:07:52.759 Exit Latency: 4 microseconds 00:07:52.759 Relative Read Throughput: 0 00:07:52.759 Relative Read 
Latency: 0 00:07:52.759 Relative Write Throughput: 0 00:07:52.759 Relative Write Latency: 0 00:07:52.759 Idle Power: Not Reported 00:07:52.759 Active Power: Not Reported 00:07:52.759 Non-Operational Permissive Mode: Not Supported 00:07:52.759 00:07:52.759 Health Information 00:07:52.759 ================== 00:07:52.759 Critical Warnings: 00:07:52.759 Available Spare Space: OK 00:07:52.759 Temperature: OK 00:07:52.759 Device Reliability: OK 00:07:52.759 Read Only: No 00:07:52.759 Volatile Memory Backup: OK 00:07:52.759 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.759 Available Spare: 0% 00:07:52.759 Available Spare Threshold: 0% 00:07:52.759 Life Percentage Used: 0% 00:07:52.759 Data Units Read: 854 00:07:52.759 Data Units Written: 783 00:07:52.759 Host Read Commands: 40770 00:07:52.759 Host Write Commands: 40194 00:07:52.759 Controller Busy Time: 0 minutes 00:07:52.759 Power Cycles: 0 00:07:52.759 Power On Hours: 0 hours 00:07:52.759 Unsafe Shutdowns: 0 00:07:52.759 Unrecoverable Media Errors: 0 00:07:52.759 Lifetime Error Log Entries: 0 00:07:52.759 Warning Temperature Time: 0 minutes 00:07:52.759 Critical Temperature Time: 0 minutes 00:07:52.759 00:07:52.759 Number of Queues 00:07:52.759 ================ 00:07:52.759 Number of I/O Submission Queues: 64 00:07:52.759 Number of I/O Completion Queues: 64 00:07:52.759 00:07:52.759 ZNS Specific Controller Data 00:07:52.759 ============================ 00:07:52.759 Zone Append Size Limit: 0 00:07:52.759 00:07:52.759 00:07:52.759 Active Namespaces 00:07:52.759 ================= 00:07:52.759 Namespace ID:1 00:07:52.759 Error Recovery Timeout: Unlimited 00:07:52.759 Command Set Identifier: NVM (00h) 00:07:52.759 Deallocate: Supported 00:07:52.759 Deallocated/Unwritten Error: Supported 00:07:52.759 Deallocated Read Value: All 0x00 00:07:52.759 Deallocate in Write Zeroes: Not Supported 00:07:52.759 Deallocated Guard Field: 0xFFFF 00:07:52.759 Flush: 
Supported 00:07:52.759 Reservation: Not Supported 00:07:52.759 Namespace Sharing Capabilities: Multiple Controllers 00:07:52.759 Size (in LBAs): 262144 (1GiB) 00:07:52.759 Capacity (in LBAs): 262144 (1GiB) 00:07:52.759 Utilization (in LBAs): 262144 (1GiB) 00:07:52.759 Thin Provisioning: Not Supported 00:07:52.759 Per-NS Atomic Units: No 00:07:52.759 Maximum Single Source Range Length: 128 00:07:52.759 Maximum Copy Length: 128 00:07:52.759 Maximum Source Range Count: 128 00:07:52.759 NGUID/EUI64 Never Reused: No 00:07:52.759 Namespace Write Protected: No 00:07:52.759 Endurance group ID: 1 00:07:52.759 Number of LBA Formats: 8 00:07:52.759 Current LBA Format: LBA Format #04 00:07:52.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.759 00:07:52.759 Get Feature FDP: 00:07:52.759 ================ 00:07:52.759 Enabled: Yes 00:07:52.759 FDP configuration index: 0 00:07:52.759 00:07:52.759 FDP configurations log page 00:07:52.759 =========================== 00:07:52.759 Number of FDP configurations: 1 00:07:52.759 Version: 0 00:07:52.759 Size: 112 00:07:52.759 FDP Configuration Descriptor: 0 00:07:52.759 Descriptor Size: 96 00:07:52.759 Reclaim Group Identifier format: 2 00:07:52.759 FDP Volatile Write Cache: Not Present 00:07:52.759 FDP Configuration: Valid 00:07:52.759 Vendor Specific Size: 0 00:07:52.759 Number of Reclaim Groups: 2 00:07:52.759 Number of Reclaim Unit Handles: 8 00:07:52.759 Max Placement Identifiers: 128 00:07:52.759 Number of Namespaces Supported: 256 00:07:52.759 Reclaim unit Nominal Size: 6000000 bytes 
00:07:52.759 Estimated Reclaim Unit Time Limit: Not Reported 00:07:52.759 RUH Desc #000: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #001: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #002: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #003: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #004: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #005: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #006: RUH Type: Initially Isolated 00:07:52.759 RUH Desc #007: RUH Type: Initially Isolated 00:07:52.759 00:07:52.759 FDP reclaim unit handle usage log page 00:07:52.759 ==================================[2024-12-14 01:07:26.224877] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76089 terminated unexpected 00:07:52.759 ==== 00:07:52.759 Number of Reclaim Unit Handles: 8 00:07:52.759 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:52.759 RUH Usage Desc #001: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #002: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #003: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #004: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #005: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #006: RUH Attributes: Unused 00:07:52.759 RUH Usage Desc #007: RUH Attributes: Unused 00:07:52.759 00:07:52.759 FDP statistics log page 00:07:52.759 ======================= 00:07:52.759 Host bytes with metadata written: 512139264 00:07:52.759 Media bytes with metadata written: 512196608 00:07:52.759 Media bytes erased: 0 00:07:52.759 00:07:52.759 FDP events log page 00:07:52.759 =================== 00:07:52.759 Number of FDP events: 0 00:07:52.759 00:07:52.759 NVM Specific Namespace Data 00:07:52.759 =========================== 00:07:52.759 Logical Block Storage Tag Mask: 0 00:07:52.759 Protection Information Capabilities: 00:07:52.759 16b Guard Protection Information Storage Tag Support: No 00:07:52.759 16b Guard Protection Information Storage Tag Mask: Any bit in 
LBSTM can be 0 00:07:52.759 Storage Tag Check Read Support: No 00:07:52.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.759 ===================================================== 00:07:52.759 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.759 ===================================================== 00:07:52.759 Controller Capabilities/Features 00:07:52.759 ================================ 00:07:52.759 Vendor ID: 1b36 00:07:52.759 Subsystem Vendor ID: 1af4 00:07:52.759 Serial Number: 12340 00:07:52.759 Model Number: QEMU NVMe Ctrl 00:07:52.759 Firmware Version: 8.0.0 00:07:52.759 Recommended Arb Burst: 6 00:07:52.759 IEEE OUI Identifier: 00 54 52 00:07:52.759 Multi-path I/O 00:07:52.759 May have multiple subsystem ports: No 00:07:52.759 May have multiple controllers: No 00:07:52.759 Associated with SR-IOV VF: No 00:07:52.759 Max Data Transfer Size: 524288 00:07:52.759 Max Number of Namespaces: 256 00:07:52.759 Max Number of I/O Queues: 64 00:07:52.760 NVMe Specification Version (VS): 1.4 00:07:52.760 NVMe Specification Version (Identify): 1.4 00:07:52.760 Maximum Queue Entries: 2048 00:07:52.760 Contiguous Queues Required: Yes 00:07:52.760 Arbitration Mechanisms Supported 00:07:52.760 Weighted 
Round Robin: Not Supported 00:07:52.760 Vendor Specific: Not Supported 00:07:52.760 Reset Timeout: 7500 ms 00:07:52.760 Doorbell Stride: 4 bytes 00:07:52.760 NVM Subsystem Reset: Not Supported 00:07:52.760 Command Sets Supported 00:07:52.760 NVM Command Set: Supported 00:07:52.760 Boot Partition: Not Supported 00:07:52.760 Memory Page Size Minimum: 4096 bytes 00:07:52.760 Memory Page Size Maximum: 65536 bytes 00:07:52.760 Persistent Memory Region: Not Supported 00:07:52.760 Optional Asynchronous Events Supported 00:07:52.760 Namespace Attribute Notices: Supported 00:07:52.760 Firmware Activation Notices: Not Supported 00:07:52.760 ANA Change Notices: Not Supported 00:07:52.760 PLE Aggregate Log Change Notices: Not Supported 00:07:52.760 LBA Status Info Alert Notices: Not Supported 00:07:52.760 EGE Aggregate Log Change Notices: Not Supported 00:07:52.760 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.760 Zone Descriptor Change Notices: Not Supported 00:07:52.760 Discovery Log Change Notices: Not Supported 00:07:52.760 Controller Attributes 00:07:52.760 128-bit Host Identifier: Not Supported 00:07:52.760 Non-Operational Permissive Mode: Not Supported 00:07:52.760 NVM Sets: Not Supported 00:07:52.760 Read Recovery Levels: Not Supported 00:07:52.760 Endurance Groups: Not Supported 00:07:52.760 Predictable Latency Mode: Not Supported 00:07:52.760 Traffic Based Keep ALive: Not Supported 00:07:52.760 Namespace Granularity: Not Supported 00:07:52.760 SQ Associations: Not Supported 00:07:52.760 UUID List: Not Supported 00:07:52.760 Multi-Domain Subsystem: Not Supported 00:07:52.760 Fixed Capacity Management: Not Supported 00:07:52.760 Variable Capacity Management: Not Supported 00:07:52.760 Delete Endurance Group: Not Supported 00:07:52.760 Delete NVM Set: Not Supported 00:07:52.760 Extended LBA Formats Supported: Supported 00:07:52.760 Flexible Data Placement Supported: Not Supported 00:07:52.760 00:07:52.760 Controller Memory Buffer Support 00:07:52.760 
================================ 00:07:52.760 Supported: No 00:07:52.760 00:07:52.760 Persistent Memory Region Support 00:07:52.760 ================================ 00:07:52.760 Supported: No 00:07:52.760 00:07:52.760 Admin Command Set Attributes 00:07:52.760 ============================ 00:07:52.760 Security Send/Receive: Not Supported 00:07:52.760 Format NVM: Supported 00:07:52.760 Firmware Activate/Download: Not Supported 00:07:52.760 Namespace Management: Supported 00:07:52.760 Device Self-Test: Not Supported 00:07:52.760 Directives: Supported 00:07:52.760 NVMe-MI: Not Supported 00:07:52.760 Virtualization Management: Not Supported 00:07:52.760 Doorbell Buffer Config: Supported 00:07:52.760 Get LBA Status Capability: Not Supported 00:07:52.760 Command & Feature Lockdown Capability: Not Supported 00:07:52.760 Abort Command Limit: 4 00:07:52.760 Async Event Request Limit: 4 00:07:52.760 Number of Firmware Slots: N/A 00:07:52.760 Firmware Slot 1 Read-Only: N/A 00:07:52.760 Firmware Activation Without Reset: N/A 00:07:52.760 Multiple Update Detection Support: N/A 00:07:52.760 Firmware Update Granularity: No Information Provided 00:07:52.760 Per-Namespace SMART Log: Yes 00:07:52.760 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.760 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:52.760 Command Effects Log Page: Supported 00:07:52.760 Get Log Page Extended Data: Supported 00:07:52.760 Telemetry Log Pages: Not Supported 00:07:52.760 Persistent Event Log Pages: Not Supported 00:07:52.760 Supported Log Pages Log Page: May Support 00:07:52.760 Commands Supported & Effects Log Page: Not Supported 00:07:52.760 Feature Identifiers & Effects Log Page:May Support 00:07:52.760 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.760 Data Area 4 for Telemetry Log: Not Supported 00:07:52.760 Error Log Page Entries Supported: 1 00:07:52.760 Keep Alive: Not Supported 00:07:52.760 00:07:52.760 NVM Command Set Attributes 00:07:52.760 ========================== 
00:07:52.760 Submission Queue Entry Size 00:07:52.760 Max: 64 00:07:52.760 Min: 64 00:07:52.760 Completion Queue Entry Size 00:07:52.760 Max: 16 00:07:52.760 Min: 16 00:07:52.760 Number of Namespaces: 256 00:07:52.760 Compare Command: Supported 00:07:52.760 Write Uncorrectable Command: Not Supported 00:07:52.760 Dataset Management Command: Supported 00:07:52.760 Write Zeroes Command: Supported 00:07:52.760 Set Features Save Field: Supported 00:07:52.760 Reservations: Not Supported 00:07:52.760 Timestamp: Supported 00:07:52.760 Copy: Supported 00:07:52.760 Volatile Write Cache: Present 00:07:52.760 Atomic Write Unit (Normal): 1 00:07:52.760 Atomic Write Unit (PFail): 1 00:07:52.760 Atomic Compare & Write Unit: 1 00:07:52.760 Fused Compare & Write: Not Supported 00:07:52.760 Scatter-Gather List 00:07:52.760 SGL Command Set: Supported 00:07:52.760 SGL Keyed: Not Supported 00:07:52.760 SGL Bit Bucket Descriptor: Not Supported 00:07:52.760 SGL Metadata Pointer: Not Supported 00:07:52.760 Oversized SGL: Not Supported 00:07:52.760 SGL Metadata Address: Not Supported 00:07:52.760 SGL Offset: Not Supported 00:07:52.760 Transport SGL Data Block: Not Supported 00:07:52.760 Replay Protected Memory Block: Not Supported 00:07:52.760 00:07:52.760 Firmware Slot Information 00:07:52.760 ========================= 00:07:52.760 Active slot: 1 00:07:52.760 Slot 1 Firmware Revision: 1.0 00:07:52.760 00:07:52.760 00:07:52.760 Commands Supported and Effects 00:07:52.760 ============================== 00:07:52.760 Admin Commands 00:07:52.760 -------------- 00:07:52.760 Delete I/O Submission Queue (00h): Supported 00:07:52.760 Create I/O Submission Queue (01h): Supported 00:07:52.760 Get Log Page (02h): Supported 00:07:52.760 Delete I/O Completion Queue (04h): Supported 00:07:52.760 Create I/O Completion Queue (05h): Supported 00:07:52.760 Identify (06h): Supported 00:07:52.760 Abort (08h): Supported 00:07:52.760 Set Features (09h): Supported 00:07:52.760 Get Features (0Ah): Supported 
00:07:52.760 Asynchronous Event Request (0Ch): Supported 00:07:52.760 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.760 Directive Send (19h): Supported 00:07:52.760 Directive Receive (1Ah): Supported 00:07:52.760 Virtualization Management (1Ch): Supported 00:07:52.760 Doorbell Buffer Config (7Ch): Supported 00:07:52.760 Format NVM (80h): Supported LBA-Change 00:07:52.760 I/O Commands 00:07:52.760 ------------ 00:07:52.760 Flush (00h): Supported LBA-Change 00:07:52.760 Write (01h): Supported LBA-Change 00:07:52.760 Read (02h): Supported 00:07:52.760 Compare (05h): Supported 00:07:52.760 Write Zeroes (08h): Supported LBA-Change 00:07:52.760 Dataset Management (09h): Supported LBA-Change 00:07:52.760 Unknown (0Ch): Supported 00:07:52.760 Unknown (12h): Supported 00:07:52.760 Copy (19h): Supported LBA-Change 00:07:52.760 Unknown (1Dh): Supported LBA-Change 00:07:52.760 00:07:52.760 Error Log 00:07:52.760 ========= 00:07:52.760 00:07:52.760 Arbitration 00:07:52.760 =========== 00:07:52.760 Arbitration Burst: no limit 00:07:52.760 00:07:52.760 Power Management 00:07:52.760 ================ 00:07:52.760 Number of Power States: 1 00:07:52.760 Current Power State: Power State #0 00:07:52.760 Power State #0: 00:07:52.760 Max Power: 25.00 W 00:07:52.760 Non-Operational State: Operational 00:07:52.760 Entry Latency: 16 microseconds 00:07:52.760 Exit Latency: 4 microseconds 00:07:52.760 Relative Read Throughput: 0 00:07:52.760 Relative Read Latency: 0 00:07:52.760 Relative Write Throughput: 0 00:07:52.760 Relative Write Latency: 0 00:07:52.760 Idle Power: Not Reported 00:07:52.760 Active Power: Not Reported 00:07:52.760 Non-Operational Permissive Mode: Not Supported 00:07:52.760 00:07:52.760 Health Information 00:07:52.760 ================== 00:07:52.760 Critical Warnings: 00:07:52.760 Available Spare Space: OK 00:07:52.760 Temperature: OK 00:07:52.760 Device Reliability: OK 00:07:52.760 Read Only: No 00:07:52.760 Volatile Memory Backup: OK 00:07:52.760 
Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.761 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.761 Available Spare: 0% 00:07:52.761 Available Spare Threshold: 0% 00:07:52.761 Life Percentage Used: 0% 00:07:52.761 Data Units Read: 700 00:07:52.761 Data Units Written: 628 00:07:52.761 Host Read Commands: 39085 00:07:52.761 Host Write Commands: 38871 00:07:52.761 Controller Busy Time: 0 minutes 00:07:52.761 Power Cycles: 0 00:07:52.761 Power On Hours: 0 hours 00:07:52.761 Unsafe Shutdowns: 0 00:07:52.761 Unrecoverable Media Errors: 0 00:07:52.761 Lifetime Error Log Entries: 0 00:07:52.761 Warning Temperature Time: 0 minutes 00:07:52.761 Critical Temperature Time: 0 minutes 00:07:52.761 00:07:52.761 Number of Queues 00:07:52.761 ================ 00:07:52.761 Number of I/O Submission Queues: 64 00:07:52.761 Number of I/O Completion Queues: 64 00:07:52.761 00:07:52.761 ZNS Specific Controller Data 00:07:52.761 ============================ 00:07:52.761 Zone Append Size Limit: 0 00:07:52.761 00:07:52.761 00:07:52.761 Active Namespaces 00:07:52.761 ================= 00:07:52.761 Namespace ID:1 00:07:52.761 Error Recovery Timeout: Unlimited 00:07:52.761 Command Set Identifier: NVM (00h) 00:07:52.761 Deallocate: Supported 00:07:52.761 Deallocated/Unwritten Error: Supported 00:07:52.761 Deallocated Read Value: All 0x00 00:07:52.761 Deallocate in Write Zeroes: Not Supported 00:07:52.761 Deallocated Guard Field: 0xFFFF 00:07:52.761 Flush: Supported 00:07:52.761 Reservation: Not Supported 00:07:52.761 Metadata Transferred as: Separate Metadata Buffer 00:07:52.761 Namespace Sharing Capabilities: Private 00:07:52.761 Size (in LBAs): 1548666 (5GiB) 00:07:52.761 Capacity (in LBAs): 1548666 (5GiB) 00:07:52.761 Utilization (in LBAs): 1548666 (5GiB) 00:07:52.761 Thin Provisioning: Not Supported 00:07:52.761 Per-NS Atomic Units: No 00:07:52.761 Maximum Single Source Range Length: 128 00:07:52.761 Maximum Copy Length: 128 00:07:52.761 Maximum Source Range Count: 128 
00:07:52.761 NGUID/EUI64 Never Reused: No 00:07:52.761 Namespace Write Protected: No 00:07:52.761 Number of LBA Formats: 8 00:07:52.761 Current LBA Format: LBA Format #07 00:07:52.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.761 00:07:52.761 NVM Specific Namespace Data 00:07:52.761 =========================== 00:07:52.761 Logical Block Storage Tag Mask: 0 00:07:52.761 Protection Information Capabilities: 00:07:52.761 16b Guard Protection Information Storage Tag Support: No 00:07:52.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.761 Storage Tag Check Read Support: No 00:07:52.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.761 ===================================================== 00:07:52.761 NVMe Controller at 
0000:00:11.0 [1b36:0010] 00:07:52.761 ===================================================== 00:07:52.761 Controller Capabilities/Features 00:07:52.761 ================================ 00:07:52.761 Vendor ID: 1b36 00:07:52.761 Subsystem Vendor ID: 1af4 00:07:52.761 Serial Number: 12341 00:07:52.761 Model Number: QEMU NVMe Ctrl 00:07:52.761 Firmware Version: 8.0.0 00:07:52.761 Recommended Arb Burst: 6 00:07:52.761 IEEE OUI Identifier: 00 54 52 00:07:52.761 Multi-path I/O 00:07:52.761 May have multiple subsystem ports: No 00:07:52.761 May have multiple controllers: No 00:07:52.761 Associated with SR-IOV VF: No 00:07:52.761 Max Data Transfer Size: 524288 00:07:52.761 Max Number of Namespaces: 256 00:07:52.761 Max Number of I/O Queues: 64 00:07:52.761 NVMe Specification Version (VS): 1.4 00:07:52.761 NVMe Specification Version (Identify): 1.4 00:07:52.761 Maximum Queue Entries: 2048 00:07:52.761 Contiguous Queues Required: Yes 00:07:52.761 Arbitration Mechanisms Supported 00:07:52.761 Weighted Round Robin: Not Supported 00:07:52.761 Vendor Specific: Not Supported 00:07:52.761 Reset Timeout: 7500 ms 00:07:52.761 Doorbell Stride: 4 bytes 00:07:52.761 NVM Subsystem Reset: Not Supported 00:07:52.761 Command Sets Supported 00:07:52.761 NVM Command Set: Supported 00:07:52.761 Boot Partition: Not Supported 00:07:52.761 Memory Page Size Minimum: 4096 bytes 00:07:52.761 Memory Page Size Maximum: 65536 bytes 00:07:52.761 Persistent Memory Region: Not Supported 00:07:52.761 Optional Asynchronous Events Supported 00:07:52.761 Namespace Attribute Notices: Supported 00:07:52.761 Firmware Activation Notices: Not Supported 00:07:52.761 ANA Change Notices: Not Supported 00:07:52.761 PLE Aggregate Log Change Notices: Not Supported 00:07:52.761 LBA Status Info Alert Notices: Not Supported 00:07:52.761 EGE Aggregate Log Change Notices: Not Supported 00:07:52.761 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.761 Zone Descriptor Change Notices: Not Supported 00:07:52.761 
Discovery Log Change Notices: Not Supported 00:07:52.761 Controller Attributes 00:07:52.761 128-bit Host Identifier: Not Supported 00:07:52.761 Non-Operational Permissive Mode: Not Supported 00:07:52.761 NVM Sets: Not Supported 00:07:52.761 Read Recovery Levels: Not Supported 00:07:52.761 Endurance Groups: Not Supported 00:07:52.761 Predictable Latency Mode: Not Supported 00:07:52.761 Traffic Based Keep ALive: Not Supported 00:07:52.761 Namespace Granularity: Not Supported 00:07:52.761 SQ Associations: Not Supported 00:07:52.761 UUID List: Not Supported 00:07:52.761 Multi-Domain Subsystem: Not Supported 00:07:52.761 Fixed Capacity Management: Not Supported 00:07:52.761 Variable Capacity Management: Not Supported 00:07:52.761 Delete Endurance Group: Not Supported 00:07:52.761 Delete NVM Set: Not Supported 00:07:52.761 Extended LBA Formats Supported: Supported 00:07:52.761 Flexible Data Placement Supported: Not Supported 00:07:52.761 00:07:52.761 Controller Memory Buffer Support 00:07:52.761 ================================ 00:07:52.761 Supported: No 00:07:52.761 00:07:52.761 Persistent Memory Region Support 00:07:52.761 ================================ 00:07:52.761 Supported: No 00:07:52.761 00:07:52.761 Admin Command Set Attributes 00:07:52.761 ============================ 00:07:52.761 Security Send/Receive: Not Supported 00:07:52.761 Format NVM: Supported 00:07:52.761 Firmware Activate/Download: Not Supported 00:07:52.761 Namespace Management: Supported 00:07:52.761 Device Self-Test: Not Supported 00:07:52.761 Directives: Supported 00:07:52.761 NVMe-MI: Not Supported 00:07:52.761 Virtualization Management: Not Supported 00:07:52.761 Doorbell Buffer Config: Supported 00:07:52.761 Get LBA Status Capability: Not Supported 00:07:52.761 Command & Feature Lockdown Capability: Not Supported 00:07:52.761 Abort Command Limit: 4 00:07:52.761 Async Event Request Limit: 4 00:07:52.761 Number of Firmware Slots: N/A 00:07:52.761 Firmware Slot 1 Read-Only: N/A 00:07:52.761 
Firmware Activation Without Reset: N/A 00:07:52.761 Multiple Update Detection Support: N/A 00:07:52.761 Firmware Update Granularity: No Information Provided 00:07:52.761 Per-Namespace SMART Log: Yes 00:07:52.761 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.761 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:52.761 Command Effects Log Page: Supported 00:07:52.761 Get Log Page Extended Data: Supported 00:07:52.761 Telemetry Log Pages: Not Supported 00:07:52.761 Persistent Event Log Pages: Not Supported 00:07:52.761 Supported Log Pages Log Page: May Support 00:07:52.761 Commands Supported & Effects Log Page: Not Supported 00:07:52.761 Feature Identifiers & Effects Log Page:May Support 00:07:52.761 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.761 Data Area 4 for Telemetry Log: Not Supported 00:07:52.761 Error Log Page Entries Supported: 1 00:07:52.761 Keep Alive: Not Supported 00:07:52.761 00:07:52.761 NVM Command Set Attributes 00:07:52.761 ========================== 00:07:52.761 Submission Queue Entry Size 00:07:52.761 Max: 64 00:07:52.761 Min: 64 00:07:52.761 Completion Queue Entry Size 00:07:52.761 Max: 16 00:07:52.761 Min: 16 00:07:52.761 Number of Namespaces: 256 00:07:52.761 Compare Command: Supported 00:07:52.761 Write Uncorrectable Command: Not Supported 00:07:52.761 Dataset Management Command: Supported 00:07:52.761 Write Zeroes Command: Supported 00:07:52.761 Set Features Save Field: Supported 00:07:52.761 Reservations: Not Supported 00:07:52.761 Timestamp: Supported 00:07:52.762 Copy: Supported 00:07:52.762 Volatile Write Cache: Present 00:07:52.762 Atomic Write Unit (Normal): 1 00:07:52.762 Atomic Write Unit (PFail): 1 00:07:52.762 Atomic Compare & Write Unit: 1 00:07:52.762 Fused Compare & Write: Not Supported 00:07:52.762 Scatter-Gather List 00:07:52.762 SGL Command Set: Supported 00:07:52.762 SGL Keyed: Not Supported 00:07:52.762 SGL Bit Bucket Descriptor: Not Supported 00:07:52.762 SGL Metadata Pointer: Not Supported 
00:07:52.762 Oversized SGL: Not Supported 00:07:52.762 SGL Metadata Address: Not Supported 00:07:52.762 SGL Offset: Not Supported 00:07:52.762 Transport SGL Data Block: Not Supported 00:07:52.762 Replay Protected Memory Block: Not Supported 00:07:52.762 00:07:52.762 Firmware Slot Information 00:07:52.762 ========================= 00:07:52.762 Active slot: 1 00:07:52.762 Slot 1 Firmware Revision: 1.0 00:07:52.762 00:07:52.762 00:07:52.762 Commands Supported and Effects 00:07:52.762 ============================== 00:07:52.762 Admin Commands 00:07:52.762 -------------- 00:07:52.762 Delete I/O Submission Queue (00h): Supported 00:07:52.762 Create I/O Submission Queue (01h): Supported 00:07:52.762 Get Log Page (02h): Supported 00:07:52.762 Delete I/O Completion Queue (04h): Supported 00:07:52.762 Create I/O Completion Queue (05h): Supported 00:07:52.762 Identify (06h): Supported 00:07:52.762 Abort (08h): Supported 00:07:52.762 Set Features (09h): Supported 00:07:52.762 Get Features (0Ah): Supported 00:07:52.762 Asynchronous Event Request (0Ch): Supported 00:07:52.762 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.762 Directive Send (19h): Supported 00:07:52.762 Directive Receive (1Ah): Supported 00:07:52.762 Virtualization Management (1Ch): Supported 00:07:52.762 Doorbell Buffer Config (7Ch): Supported 00:07:52.762 Format NVM (80h): Supported LBA-Change 00:07:52.762 I/O Commands 00:07:52.762 ------------ 00:07:52.762 Flush (00h): Supported LBA-Change 00:07:52.762 Write (01h): Supported LBA-Change 00:07:52.762 Read (02h): Supported 00:07:52.762 Compare (05h): Supported 00:07:52.762 Write Zeroes (08h): Supported LBA-Change 00:07:52.762 Dataset Management (09h): Supported LBA-Change 00:07:52.762 Unknown (0Ch): Supported 00:07:52.762 Unknown (12h): Supported 00:07:52.762 Copy (19h): Supported LBA-Change 00:07:52.762 Unknown (1Dh): Supported LBA-Change 00:07:52.762 00:07:52.762 Error Log 00:07:52.762 ========= 00:07:52.762 00:07:52.762 Arbitration 
00:07:52.762 =========== 00:07:52.762 Arbitration Burst: no limit 00:07:52.762 00:07:52.762 Power Management 00:07:52.762 ================ 00:07:52.762 Number of Power States: 1 00:07:52.762 Current Power State: Power State #0 00:07:52.762 Power State #0: 00:07:52.762 Max Power: 25.00 W 00:07:52.762 Non-Operational State: Operational 00:07:52.762 Entry Latency: 16 microseconds 00:07:52.762 Exit Latency: 4 microseconds 00:07:52.762 Relative Read Throughput: 0 00:07:52.762 Relative Read Latency: 0 00:07:52.762 Relative Write Throughput: 0 00:07:52.762 Relative Write Latency: 0 00:07:52.762 Idle Power: Not Reported 00:07:52.762 Active Power: Not Reported 00:07:52.762 Non-Operational Permissive Mode: Not Supported 00:07:52.762 00:07:52.762 Health Information 00:07:52.762 ================== 00:07:52.762 Critical Warnings: 00:07:52.762 Available Spare Space: OK 00:07:52.762 Temperature: OK 00:07:52.762 Device Reliability: OK 00:07:52.762 Read Only: No 00:07:52.762 Volatile Memory Backup: OK 00:07:52.762 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.762 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.762 Available Spare: 0% 00:07:52.762 Available Spare Threshold: 0% 00:07:52.762 Life Percentage Used: 0% 00:07:52.762 Data Units Read: 1050 00:07:52.762 Data Units Written: 922 00:07:52.762 Host Read Commands: 56039 00:07:52.762 Host Write Commands: 54912 00:07:52.762 Controller Busy Time: 0 minutes 00:07:52.762 Power Cycles: 0 00:07:52.762 Power On Hours: 0 hours 00:07:52.762 Unsafe Shutdowns: 0 00:07:52.762 Unrecoverable Media Errors: 0 00:07:52.762 Lifetime Error Log Entries: 0 00:07:52.762 Warning Temperature Time: 0 minutes 00:07:52.762 Critical Temperature Time: 0 minutes 00:07:52.762 00:07:52.762 Number of Queues 00:07:52.762 ================ 00:07:52.762 Number of I/O Submission Queues: 64 00:07:52.762 Number of I/O Completion Queues: 64 00:07:52.762 00:07:52.762 ZNS Specific Controller Data 00:07:52.762 ============================ 00:07:52.762 Zone 
Append Size Limit: 0 00:07:52.762 00:07:52.762 00:07:52.762 Active Namespaces 00:07:52.762 ================= 00:07:52.762 Namespace ID:1 00:07:52.762 Error Recovery Timeout: Unlimited 00:07:52.762 Command Set Identifier: NVM (00h) 00:07:52.762 Deallocate: Supported 00:07:52.762 Deallocated/Unwritten Error: Supported 00:07:52.762 Deallocated Read Value: All 0x00 00:07:52.762 Deallocate in Write Zeroes: Not Supported 00:07:52.762 Deallocated Guard Field: 0xFFFF 00:07:52.762 Flush: Supported 00:07:52.762 Reservation: Not Supported 00:07:52.762 Namespace Sharing Capabilities: Private 00:07:52.762 Size (in LBAs): 1310720 (5GiB) 00:07:52.762 Capacity (in LBAs): 1310720 (5GiB) 00:07:52.762 Utilization (in LBAs): 1310720 (5GiB) 00:07:52.762 Thin Provisioning: Not Supported 00:07:52.762 Per-NS Atomic Units: No 00:07:52.762 Maximum Single Source Range Length: 128 00:07:52.762 Maximum Copy Length: 128 00:07:52.762 Maximum Source Range Count: 128 00:07:52.762 NGUID/EUI64 Never Reused: No 00:07:52.762 Namespace Write Protected: No 00:07:52.762 Number of LBA Formats: 8 00:07:52.762 Current LBA Format: LBA Format #04 00:07:52.762 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.762 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.762 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.762 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.762 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.762 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.762 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.762 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.762 00:07:52.762 NVM Specific Namespace Data 00:07:52.762 =========================== 00:07:52.762 Logical Block Storage Tag Mask: 0 00:07:52.762 Protection Information Capabilities: 00:07:52.762 16b Guard Protection Information Storage Tag Support: No 00:07:52.762 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.762 Storage Tag Check 
Read Support: No 00:07:52.762 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.762 ===================================================== 00:07:52.762 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.762 ===================================================== 00:07:52.762 Controller Capabilities/Features 00:07:52.762 ================================ 00:07:52.762 Vendor ID: 1b36 00:07:52.762 Subsystem Vendor ID: 1af4 00:07:52.762 Serial Number: 12342 00:07:52.762 Model Number: QEMU NVMe Ctrl 00:07:52.762 Firmware Version: 8.0.0 00:07:52.762 Recommended Arb Burst: 6 00:07:52.762 IEEE OUI Identifier: 00 54 52 00:07:52.762 Multi-path I/O 00:07:52.762 May have multiple subsystem ports: No 00:07:52.762 May have multiple controllers: No 00:07:52.762 Associated with SR-IOV VF: No 00:07:52.762 Max Data Transfer Size: 524288 00:07:52.762 Max Number of Namespaces: 256 00:07:52.762 Max Number of I/O Queues: 64 00:07:52.762 NVMe Specification Version (VS): 1.4 00:07:52.762 NVMe Specification Version (Identify): 1.4 00:07:52.762 Maximum Queue Entries: 2048 00:07:52.762 Contiguous Queues Required: Yes 00:07:52.762 Arbitration Mechanisms Supported 00:07:52.762 Weighted Round Robin: Not Supported 00:07:52.762 Vendor 
Specific: Not Supported 00:07:52.762 Reset Timeout: 7500 ms 00:07:52.762 Doorbell Stride: 4 bytes 00:07:52.762 NVM Subsystem Reset: Not Supported 00:07:52.762 Command Sets Supported 00:07:52.762 NVM Command Set: Supported 00:07:52.762 Boot Partition: Not Supported 00:07:52.762 Memory Page Size Minimum: 4096 bytes 00:07:52.763 Memory Page Size Maximum: 65536 bytes 00:07:52.763 Persistent Memory Region: Not Supported 00:07:52.763 Optional Asynchronous Events Supported 00:07:52.763 Namespace Attribute Notices: Supported 00:07:52.763 Firmware Activation Notices: Not Supported 00:07:52.763 ANA Change Notices: Not Supported 00:07:52.763 PLE Aggregate Log Change Notices: Not Supported 00:07:52.763 LBA Status Info Alert Notices: Not Supported 00:07:52.763 EGE Aggregate Log Change Notices: Not Supported 00:07:52.763 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.763 Zone Descriptor Change Notices: Not Supported 00:07:52.763 Discovery Log Change Notices: Not Supported 00:07:52.763 Controller Attributes 00:07:52.763 128-bit Host Identifier: Not Supported 00:07:52.763 Non-Operational Permissive Mode: Not Supported 00:07:52.763 NVM Sets: Not Supported 00:07:52.763 Read Recovery Levels: Not Supported 00:07:52.763 Endurance Groups: Not Supported 00:07:52.763 Predictable Latency Mode: Not Supported 00:07:52.763 Traffic Based Keep ALive: Not Supported 00:07:52.763 Namespace Granularity: Not Supported 00:07:52.763 SQ Associations: Not Supported 00:07:52.763 UUID List: Not Supported 00:07:52.763 Multi-Domain Subsystem: Not Supported 00:07:52.763 Fixed Capacity Management: Not Supported 00:07:52.763 Variable Capacity Management: Not Supported 00:07:52.763 Delete Endurance Group: Not Supported 00:07:52.763 Delete NVM Set: Not Supported 00:07:52.763 Extended LBA Formats Supported: Supported 00:07:52.763 Flexible Data Placement Supported: Not Supported 00:07:52.763 00:07:52.763 Controller Memory Buffer Support 00:07:52.763 ================================ 00:07:52.763 
Supported: No 00:07:52.763 00:07:52.763 Persistent Memory Region Support 00:07:52.763 ================================ 00:07:52.763 Supported: No 00:07:52.763 00:07:52.763 Admin Command Set Attributes 00:07:52.763 ============================ 00:07:52.763 Security Send/Receive: Not Supported 00:07:52.763 Format NVM: Supported 00:07:52.763 Firmware Activate/Download: Not Supported 00:07:52.763 Namespace Management: Supported 00:07:52.763 Device Self-Test: Not Supported 00:07:52.763 Directives: Supported 00:07:52.763 NVMe-MI: Not Supported 00:07:52.763 Virtualization Management: Not Supported 00:07:52.763 Doorbell Buffer Config: Supported 00:07:52.763 Get LBA Status Capability: Not Supported 00:07:52.763 Command & Feature Lockdown Capability: Not Supported 00:07:52.763 Abort Command Limit: 4 00:07:52.763 Async Event Request Limit: 4 00:07:52.763 Number of Firmware Slots: N/A 00:07:52.763 Firmware Slot 1 Read-Only: N/A 00:07:52.763 Firmware Activation Without Reset: N/A 00:07:52.763 Multiple Update Detection Support: N/A 00:07:52.763 Firmware Update Granularity: No Information Provided 00:07:52.763 Per-Namespace SMART Log: Yes 00:07:52.763 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.763 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:52.763 Command Effects Log Page: Supported 00:07:52.763 Get Log Page Extended Data: Supported 00:07:52.763 Telemetry Log Pages: Not Supported 00:07:52.763 Persistent Event Log Pages: Not Supported 00:07:52.763 Supported Log Pages Log Page: May Support 00:07:52.763 Commands Supported & Effects Log Page: Not Supported 00:07:52.763 Feature Identifiers & Effects Log Page:May Support 00:07:52.763 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.763 Data Area 4 for Telemetry Log: Not Supported 00:07:52.763 Error Log Page Entries Supported: 1 00:07:52.763 Keep Alive: Not Supported 00:07:52.763 00:07:52.763 NVM Command Set Attributes 00:07:52.763 ========================== 00:07:52.763 Submission Queue Entry Size 
00:07:52.763 Max: 64 00:07:52.763 Min: 64 00:07:52.763 Completion Queue Entry Size 00:07:52.763 Max: 16 00:07:52.763 Min: 16 00:07:52.763 Number of Namespaces: 256 00:07:52.763 Compare Command: Supported 00:07:52.763 Write Uncorrectable Command: Not Supported 00:07:52.763 Dataset Management Command: Supported 00:07:52.763 Write Zeroes Command: Supported 00:07:52.763 Set Features Save Field: Supported 00:07:52.763 Reservations: Not Supported 00:07:52.763 Timestamp: Supported 00:07:52.763 Copy: Supported 00:07:52.763 Volatile Write Cache: Present 00:07:52.763 Atomic Write Unit (Normal): 1 00:07:52.763 Atomic Write Unit (PFail): 1 00:07:52.763 Atomic Compare & Write Unit: 1 00:07:52.763 Fused Compare & Write: Not Supported 00:07:52.763 Scatter-Gather List 00:07:52.763 SGL Command Set: Supported 00:07:52.763 SGL Keyed: Not Supported 00:07:52.763 SGL Bit Bucket Descriptor: Not Supported 00:07:52.763 SGL Metadata Pointer: Not Supported 00:07:52.763 Oversized SGL: Not Supported 00:07:52.763 SGL Metadata Address: Not Supported 00:07:52.763 SGL Offset: Not Supported 00:07:52.763 Transport SGL Data Block: Not Supported 00:07:52.763 Replay Protected Memory Block: Not Supported 00:07:52.763 00:07:52.763 Firmware Slot Information 00:07:52.763 ========================= 00:07:52.763 Active slot: 1 00:07:52.763 Slot 1 Firmware Revision: 1.0 00:07:52.763 00:07:52.763 00:07:52.763 Commands Supported and Effects 00:07:52.763 ============================== 00:07:52.763 Admin Commands 00:07:52.763 -------------- 00:07:52.763 Delete I/O Submission Queue (00h): Supported 00:07:52.763 Create I/O Submission Queue (01h): Supported 00:07:52.763 Get Log Page (02h): Supported 00:07:52.763 Delete I/O Completion Queue (04h): Supported 00:07:52.763 Create I/O Completion Queue (05h): Supported 00:07:52.763 Identify (06h): Supported 00:07:52.763 Abort (08h): Supported 00:07:52.763 Set Features (09h): Supported 00:07:52.763 Get Features (0Ah): Supported 00:07:52.763 Asynchronous Event Request (0Ch): 
Supported 00:07:52.763 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.763 Directive Send (19h): Supported 00:07:52.763 Directive Receive (1Ah): Supported 00:07:52.763 Virtualization Management (1Ch): Supported 00:07:52.763 Doorbell Buffer Config (7Ch): Supported 00:07:52.763 Format NVM (80h): Supported LBA-Change 00:07:52.763 I/O Commands 00:07:52.763 ------------ 00:07:52.763 Flush (00h): Supported LBA-Change 00:07:52.763 Write (01h): Supported LBA-Change 00:07:52.763 Read (02h): Supported 00:07:52.763 Compare (05h): Supported 00:07:52.763 Write Zeroes (08h): Supported LBA-Change 00:07:52.763 Dataset Management (09h): Supported LBA-Change 00:07:52.763 Unknown (0Ch): Supported 00:07:52.763 Unknown (12h): Supported 00:07:52.763 Copy (19h): Supported LBA-Change 00:07:52.763 Unknown (1Dh): Supported LBA-Change 00:07:52.763 00:07:52.763 Error Log 00:07:52.763 ========= 00:07:52.763 00:07:52.763 Arbitration 00:07:52.763 =========== 00:07:52.763 Arbitration Burst: no limit 00:07:52.763 00:07:52.763 Power Management 00:07:52.763 ================ 00:07:52.763 Number of Power States: 1 00:07:52.763 Current Power State: Power State #0 00:07:52.763 Power State #0: 00:07:52.763 Max Power: 25.00 W 00:07:52.763 Non-Operational State: Operational 00:07:52.763 Entry Latency: 16 microseconds 00:07:52.763 Exit Latency: 4 microseconds 00:07:52.763 Relative Read Throughput: 0 00:07:52.763 Relative Read Latency: 0 00:07:52.763 Relative Write Throughput: 0 00:07:52.763 Relative Write Latency: 0 00:07:52.764 Idle Power: Not Reported 00:07:52.764 Active Power: Not Reported 00:07:52.764 Non-Operational Permissive Mode: Not Supported 00:07:52.764 00:07:52.764 Health Information 00:07:52.764 ================== 00:07:52.764 Critical Warnings: 00:07:52.764 Available Spare Space: OK 00:07:52.764 Temperature: OK 00:07:52.764 Device Reliability: OK 00:07:52.764 Read Only: No 00:07:52.764 Volatile Memory Backup: OK 00:07:52.764 Current Temperature: 323 Kelvin (50 Celsius) 
00:07:52.764 Temperature Threshold: 343 Kelvin (70 Celsius)
00:07:52.764 Available Spare: 0%
00:07:52.764 Available Spare Threshold: 0%
00:07:52.764 Life Percentage Used: 0%
00:07:52.764 Data Units Read: 2260
00:07:52.764 Data Units Written: 2048
00:07:52.764 Host Read Commands: 119480
00:07:52.764 Host Write Commands: 117749
00:07:52.764 Controller Busy Time: 0 minutes
00:07:52.764 Power Cycles: 0
00:07:52.764 Power On Hours: 0 hours
00:07:52.764 Unsafe Shutdowns: 0
00:07:52.764 Unrecoverable Media Errors: 0
00:07:52.764 Lifetime Error Log Entries: 0
00:07:52.764 Warning Temperature Time: 0 minutes
00:07:52.764 Critical Temperature Time: 0 minutes
00:07:52.764 
00:07:52.764 Number of Queues
00:07:52.764 ================
00:07:52.764 Number of I/O Submission Queues: 64
00:07:52.764 Number of I/O Completion Queues: 64
00:07:52.764 
00:07:52.764 ZNS Specific Controller Data
00:07:52.764 ============================
00:07:52.764 Zone Append Size Limit: 0
00:07:52.764 
00:07:52.764 
00:07:52.764 Active Namespaces
00:07:52.764 =================
00:07:52.764 Namespace ID:1
00:07:52.764 Error Recovery Timeout: Unlimited
00:07:52.764 Command Set Identifier: NVM (00h)
00:07:52.764 Deallocate: Supported
00:07:52.764 Deallocated/Unwritten Error: Supported
00:07:52.764 Deallocated Read Value: All 0x00
00:07:52.764 Deallocate in Write Zeroes: Not Supported
00:07:52.764 Deallocated Guard Field: 0xFFFF
00:07:52.764 Flush: Supported
00:07:52.764 Reservation: Not Supported
00:07:52.764 Namespace Sharing Capabilities: Private
00:07:52.764 Size (in LBAs): 1048576 (4GiB)
00:07:52.764 Capacity (in LBAs): 1048576 (4GiB)
00:07:52.764 Utilization (in LBAs): 1048576 (4GiB)
00:07:52.764 Thin Provisioning: Not Supported
00:07:52.764 Per-NS Atomic Units: No
00:07:52.764 Maximum Single Source Range Length: 128
00:07:52.764 Maximum Copy Length: 128
00:07:52.764 Maximum Source Range Count: 128
00:07:52.764 NGUID/EUI64 Never Reused: No
00:07:52.764 Namespace Write Protected: No
00:07:52.764 Number of LBA Formats: 8
00:07:52.764 Current LBA Format: LBA Format #04
00:07:52.764 LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:52.764 LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:52.764 LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:52.764 LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:52.764 LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:52.764 LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:52.764 LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:52.764 LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:52.764 
00:07:52.764 NVM Specific Namespace Data
00:07:52.764 ===========================
00:07:52.764 Logical Block Storage Tag Mask:               0
00:07:52.764 Protection Information Capabilities:
00:07:52.764   16b Guard Protection Information Storage Tag Support:  No
00:07:52.764   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:52.764   Storage Tag Check Read Support:                        No
00:07:52.764 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Namespace ID:2
00:07:52.764 Error Recovery Timeout: Unlimited
00:07:52.764 Command Set Identifier: NVM (00h)
00:07:52.764 Deallocate: Supported
00:07:52.764 Deallocated/Unwritten Error: Supported
00:07:52.764 Deallocated Read Value: All 0x00
00:07:52.764 Deallocate in Write Zeroes: Not Supported
00:07:52.764 Deallocated Guard Field: 0xFFFF
00:07:52.764 Flush: Supported
00:07:52.764 Reservation: Not Supported
00:07:52.764 Namespace Sharing Capabilities: Private
00:07:52.764 Size (in LBAs): 1048576 (4GiB)
00:07:52.764 Capacity (in LBAs): 1048576 (4GiB)
00:07:52.764 Utilization (in LBAs): 1048576 (4GiB)
00:07:52.764 Thin Provisioning: Not Supported
00:07:52.764 Per-NS Atomic Units: No
00:07:52.764 Maximum Single Source Range Length: 128
00:07:52.764 Maximum Copy Length: 128
00:07:52.764 Maximum Source Range Count: 128
00:07:52.764 NGUID/EUI64 Never Reused: No
00:07:52.764 Namespace Write Protected: No
00:07:52.764 Number of LBA Formats: 8
00:07:52.764 Current LBA Format: LBA Format #04
00:07:52.764 LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:52.764 LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:52.764 LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:52.764 LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:52.764 LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:52.764 LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:52.764 LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:52.764 LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:52.764 
00:07:52.764 NVM Specific Namespace Data
00:07:52.764 ===========================
00:07:52.764 Logical Block Storage Tag Mask:               0
00:07:52.764 Protection Information Capabilities:
00:07:52.764   16b Guard Protection Information Storage Tag Support:  No
00:07:52.764   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:52.764   Storage Tag Check Read Support:                        No
00:07:52.764 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Namespace ID:3
00:07:52.764 Error Recovery Timeout: Unlimited
00:07:52.764 Command Set Identifier: NVM (00h)
00:07:52.764 Deallocate: Supported
00:07:52.764 Deallocated/Unwritten Error: Supported
00:07:52.764 Deallocated Read Value: All 0x00
00:07:52.764 Deallocate in Write Zeroes: Not Supported
00:07:52.764 Deallocated Guard Field: 0xFFFF
00:07:52.764 Flush: Supported
00:07:52.764 Reservation: Not Supported
00:07:52.764 Namespace Sharing Capabilities: Private
00:07:52.764 Size (in LBAs): 1048576 (4GiB)
00:07:52.764 Capacity (in LBAs): [2024-12-14 01:07:26.225544] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76089 terminated unexpected
00:07:52.764 [2024-12-14 01:07:26.226109] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76089 terminated unexpected
00:07:52.764 1048576 (4GiB)
00:07:52.764 Utilization (in LBAs): 1048576 (4GiB)
00:07:52.764 Thin Provisioning: Not Supported
00:07:52.764 Per-NS Atomic Units: No
00:07:52.764 Maximum Single Source Range Length: 128
00:07:52.764 Maximum Copy Length: 128
00:07:52.764 Maximum Source Range Count: 128
00:07:52.764 NGUID/EUI64 Never Reused: No
00:07:52.764 Namespace Write Protected: No
00:07:52.764 Number of LBA Formats: 8
00:07:52.764 Current LBA Format: LBA Format #04
00:07:52.764 LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:52.764 LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:52.764 LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:52.764 LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:52.764 LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:52.764 LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:52.764 LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:52.764 LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:52.764 
00:07:52.764 NVM Specific Namespace Data
00:07:52.764 ===========================
00:07:52.764 Logical Block Storage Tag Mask:               0
00:07:52.764 Protection Information Capabilities:
00:07:52.764   16b Guard Protection Information Storage Tag Support:  No
00:07:52.764   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:52.764   Storage Tag Check Read Support:                        No
00:07:52.764 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:52.764 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:52.765 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:07:53.023 =====================================================
00:07:53.023 NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:53.023 =====================================================
00:07:53.023 Controller Capabilities/Features
00:07:53.023 ================================
00:07:53.023 Vendor ID: 1b36
00:07:53.023 Subsystem Vendor ID: 1af4
00:07:53.023 Serial Number: 12340
00:07:53.023 Model Number: QEMU NVMe Ctrl
00:07:53.023 Firmware Version: 8.0.0
00:07:53.023 Recommended Arb Burst: 6
00:07:53.023 IEEE OUI Identifier: 00 54 52
00:07:53.023 Multi-path I/O
00:07:53.023   May have multiple subsystem ports: No
00:07:53.023   May have multiple controllers: No
00:07:53.023   Associated with SR-IOV VF: No
00:07:53.023 Max Data Transfer Size: 524288
00:07:53.023 Max Number of Namespaces: 256
00:07:53.023 Max Number of I/O Queues: 64
00:07:53.023 NVMe Specification Version (VS): 1.4
00:07:53.023 NVMe Specification Version (Identify): 1.4
00:07:53.023 Maximum Queue Entries: 2048
00:07:53.023 Contiguous Queues Required: Yes
00:07:53.024 Arbitration Mechanisms Supported
00:07:53.024 Weighted Round Robin: Not Supported
00:07:53.024 Vendor Specific: Not Supported
00:07:53.024 Reset Timeout: 7500 ms
00:07:53.024 Doorbell Stride: 4 bytes
00:07:53.024 NVM Subsystem Reset: Not Supported
00:07:53.024 Command Sets Supported
00:07:53.024 NVM Command Set: Supported
00:07:53.024 Boot Partition: Not Supported
00:07:53.024 Memory Page Size Minimum: 4096 bytes
00:07:53.024 Memory Page Size Maximum: 65536 bytes
00:07:53.024 Persistent Memory Region: Not Supported
00:07:53.024 Optional Asynchronous Events Supported
00:07:53.024 Namespace Attribute Notices: Supported
00:07:53.024 Firmware Activation Notices: Not Supported
00:07:53.024 ANA Change Notices: Not Supported
00:07:53.024 PLE Aggregate Log Change Notices: Not Supported
00:07:53.024 LBA Status Info Alert Notices: Not Supported
00:07:53.024 EGE Aggregate Log Change Notices: Not Supported
00:07:53.024 Normal NVM Subsystem Shutdown event: Not Supported
00:07:53.024 Zone Descriptor Change Notices: Not Supported
00:07:53.024 Discovery Log Change Notices: Not Supported
00:07:53.024 Controller Attributes
00:07:53.024 128-bit Host Identifier: Not Supported
00:07:53.024 Non-Operational Permissive Mode: Not Supported
00:07:53.024 NVM Sets: Not Supported
00:07:53.024 Read Recovery Levels: Not Supported
00:07:53.024 Endurance Groups: Not Supported
00:07:53.024 Predictable Latency Mode: Not Supported
00:07:53.024 Traffic Based Keep ALive: Not Supported
00:07:53.024 Namespace Granularity: Not Supported
00:07:53.024 SQ Associations: Not Supported
00:07:53.024 UUID List: Not Supported
00:07:53.024 Multi-Domain Subsystem: Not Supported
00:07:53.024 Fixed Capacity Management: Not Supported
00:07:53.024 Variable Capacity Management: Not Supported
00:07:53.024 Delete Endurance Group: Not Supported
00:07:53.024 Delete NVM Set: Not Supported
00:07:53.024 Extended LBA Formats Supported: Supported
00:07:53.024 Flexible Data Placement Supported: Not Supported
00:07:53.024 
00:07:53.024 Controller Memory Buffer Support
00:07:53.024 ================================
00:07:53.024 Supported: No
00:07:53.024 
00:07:53.024 Persistent Memory Region Support
00:07:53.024 ================================
00:07:53.024 Supported: No
00:07:53.024 
00:07:53.024 Admin Command Set Attributes
00:07:53.024 ============================
00:07:53.024 Security Send/Receive: Not Supported
00:07:53.024 Format NVM: Supported
00:07:53.024 Firmware Activate/Download: Not Supported
00:07:53.024 Namespace Management: Supported
00:07:53.024 Device Self-Test: Not Supported
00:07:53.024 Directives: Supported
00:07:53.024 NVMe-MI: Not Supported
00:07:53.024 Virtualization Management: Not Supported
00:07:53.024 Doorbell Buffer Config: Supported
00:07:53.024 Get LBA Status Capability: Not Supported
00:07:53.024 Command & Feature Lockdown Capability: Not Supported
00:07:53.024 Abort Command Limit: 4
00:07:53.024 Async Event Request Limit: 4
00:07:53.024 Number of Firmware Slots: N/A
00:07:53.024 Firmware Slot 1 Read-Only: N/A
00:07:53.024 Firmware Activation Without Reset: N/A
00:07:53.024 Multiple Update Detection Support: N/A
00:07:53.024 Firmware Update Granularity: No Information Provided
00:07:53.024 Per-Namespace SMART Log: Yes
00:07:53.024 Asymmetric Namespace Access Log Page: Not Supported
00:07:53.024 Subsystem NQN: nqn.2019-08.org.qemu:12340
00:07:53.024 Command Effects Log Page: Supported
00:07:53.024 Get Log Page Extended Data: Supported
00:07:53.024 Telemetry Log Pages: Not Supported
00:07:53.024 Persistent Event Log Pages: Not Supported
00:07:53.024 Supported Log Pages Log Page: May Support
00:07:53.024 Commands Supported & Effects Log Page: Not Supported
00:07:53.024 Feature Identifiers & Effects Log Page:May Support
00:07:53.024 NVMe-MI Commands & Effects Log Page: May Support
00:07:53.024 Data Area 4 for Telemetry Log: Not Supported
00:07:53.024 Error Log Page Entries Supported: 1
00:07:53.024 Keep Alive: Not Supported
00:07:53.024 
00:07:53.024 NVM Command Set Attributes
00:07:53.024 ==========================
00:07:53.024 Submission Queue Entry Size
00:07:53.024 Max: 64
00:07:53.024 Min: 64
00:07:53.024 Completion Queue Entry Size
00:07:53.024 Max: 16
00:07:53.024 Min: 16
00:07:53.024 Number of Namespaces: 256
00:07:53.024 Compare Command: Supported
00:07:53.024 Write Uncorrectable Command: Not Supported
00:07:53.024 Dataset Management Command: Supported
00:07:53.024 Write Zeroes Command: Supported
00:07:53.024 Set Features Save Field: Supported
00:07:53.024 Reservations: Not Supported
00:07:53.024 Timestamp: Supported
00:07:53.024 Copy: Supported
00:07:53.024 Volatile Write Cache: Present
00:07:53.024 Atomic Write Unit (Normal): 1
00:07:53.024 Atomic Write Unit (PFail): 1
00:07:53.024 Atomic Compare & Write Unit: 1
00:07:53.024 Fused Compare & Write: Not Supported
00:07:53.024 Scatter-Gather List
00:07:53.024 SGL Command Set: Supported
00:07:53.024 SGL Keyed: Not Supported
00:07:53.024 SGL Bit Bucket Descriptor: Not Supported
00:07:53.024 SGL Metadata Pointer: Not Supported
00:07:53.024 Oversized SGL: Not Supported
00:07:53.024 SGL Metadata Address: Not Supported
00:07:53.024 SGL Offset: Not Supported
00:07:53.024 Transport SGL Data Block: Not Supported
00:07:53.024 Replay Protected Memory Block: Not Supported
00:07:53.024 
00:07:53.024 Firmware Slot Information
00:07:53.024 =========================
00:07:53.024 Active slot: 1
00:07:53.024 Slot 1 Firmware Revision: 1.0
00:07:53.024 
00:07:53.024 
00:07:53.024 Commands Supported and Effects
00:07:53.024 ==============================
00:07:53.024 Admin Commands
00:07:53.024 --------------
00:07:53.024 Delete I/O Submission Queue (00h): Supported
00:07:53.024 Create I/O Submission Queue (01h): Supported
00:07:53.024 Get Log Page (02h): Supported
00:07:53.024 Delete I/O Completion Queue (04h): Supported
00:07:53.024 Create I/O Completion Queue (05h): Supported
00:07:53.024 Identify (06h): Supported
00:07:53.024 Abort (08h): Supported
00:07:53.024 Set Features (09h): Supported
00:07:53.024 Get Features (0Ah): Supported
00:07:53.024 Asynchronous Event Request (0Ch): Supported
00:07:53.024 Namespace Attachment (15h): Supported NS-Inventory-Change
00:07:53.024 Directive Send (19h): Supported
00:07:53.024 Directive Receive (1Ah): Supported
00:07:53.024 Virtualization Management (1Ch): Supported
00:07:53.024 Doorbell Buffer Config (7Ch): Supported
00:07:53.024 Format NVM (80h): Supported LBA-Change
00:07:53.024 I/O Commands
00:07:53.024 ------------
00:07:53.024 Flush (00h): Supported LBA-Change
00:07:53.024 Write (01h): Supported LBA-Change
00:07:53.024 Read (02h): Supported
00:07:53.024 Compare (05h): Supported
00:07:53.024 Write Zeroes (08h): Supported LBA-Change
00:07:53.024 Dataset Management (09h): Supported LBA-Change
00:07:53.024 Unknown (0Ch): Supported
00:07:53.024 Unknown (12h): Supported
00:07:53.024 Copy (19h): Supported LBA-Change
00:07:53.024 Unknown (1Dh): Supported LBA-Change
00:07:53.024 
00:07:53.024 Error Log
00:07:53.024 =========
00:07:53.024 
00:07:53.024 Arbitration
00:07:53.024 ===========
00:07:53.024 Arbitration Burst: no limit
00:07:53.024 
00:07:53.024 Power Management
00:07:53.024 ================
00:07:53.024 Number of Power States: 1
00:07:53.024 Current Power State: Power State #0
00:07:53.024 Power State #0:
00:07:53.024 Max Power: 25.00 W
00:07:53.024 Non-Operational State: Operational
00:07:53.024 Entry Latency: 16 microseconds
00:07:53.024 Exit Latency: 4 microseconds
00:07:53.024 Relative Read Throughput: 0
00:07:53.024 Relative Read Latency: 0
00:07:53.024 Relative Write Throughput: 0
00:07:53.024 Relative Write Latency: 0
00:07:53.024 Idle Power: Not Reported
00:07:53.024 Active Power: Not Reported
00:07:53.024 Non-Operational Permissive Mode: Not Supported
00:07:53.024 
00:07:53.024 Health Information
00:07:53.024 ==================
00:07:53.024 Critical Warnings:
00:07:53.024 Available Spare Space: OK
00:07:53.024 Temperature: OK
00:07:53.024 Device Reliability: OK
00:07:53.024 Read Only: No
00:07:53.024 Volatile Memory Backup: OK
00:07:53.024 Current Temperature: 323 Kelvin (50 Celsius)
00:07:53.024 Temperature Threshold: 343 Kelvin (70 Celsius)
00:07:53.024 Available Spare: 0%
00:07:53.024 Available Spare Threshold: 0%
00:07:53.024 Life Percentage Used: 0%
00:07:53.024 Data Units Read: 700
00:07:53.024 Data Units Written: 628
00:07:53.024 Host Read Commands: 39085
00:07:53.024 Host Write Commands: 38871
00:07:53.024 Controller Busy Time: 0 minutes
00:07:53.024 Power Cycles: 0
00:07:53.024 Power On Hours: 0 hours
00:07:53.024 Unsafe Shutdowns: 0
00:07:53.024 Unrecoverable Media Errors: 0
00:07:53.024 Lifetime Error Log Entries: 0
00:07:53.024 Warning Temperature Time: 0 minutes
00:07:53.024 Critical Temperature Time: 0 minutes
00:07:53.024 
00:07:53.024 Number of Queues
00:07:53.024 ================
00:07:53.024 Number of I/O Submission Queues: 64
00:07:53.024 Number of I/O Completion Queues: 64
00:07:53.024 
00:07:53.024 ZNS Specific Controller Data
00:07:53.024 ============================
00:07:53.024 Zone Append Size Limit: 0
00:07:53.024 
00:07:53.024 
00:07:53.024 Active Namespaces
00:07:53.024 =================
00:07:53.024 Namespace ID:1
00:07:53.024 Error Recovery Timeout: Unlimited
00:07:53.024 Command Set Identifier: NVM (00h)
00:07:53.024 Deallocate: Supported
00:07:53.024 Deallocated/Unwritten Error: Supported
00:07:53.024 Deallocated Read Value: All 0x00
00:07:53.024 Deallocate in Write Zeroes: Not Supported
00:07:53.024 Deallocated Guard Field: 0xFFFF
00:07:53.024 Flush: Supported
00:07:53.024 Reservation: Not Supported
00:07:53.024 Metadata Transferred as: Separate Metadata Buffer
00:07:53.024 Namespace Sharing Capabilities: Private
00:07:53.025 Size (in LBAs): 1548666 (5GiB)
00:07:53.025 Capacity (in LBAs): 1548666 (5GiB)
00:07:53.025 Utilization (in LBAs): 1548666 (5GiB)
00:07:53.025 Thin Provisioning: Not Supported
00:07:53.025 Per-NS Atomic Units: No
00:07:53.025 Maximum Single Source Range Length: 128
00:07:53.025 Maximum Copy Length: 128
00:07:53.025 Maximum Source Range Count: 128
00:07:53.025 NGUID/EUI64 Never Reused: No
00:07:53.025 Namespace Write Protected: No
00:07:53.025 Number of LBA Formats: 8
00:07:53.025 Current LBA Format: LBA Format #07
00:07:53.025 LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:53.025 LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:53.025 LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:53.025 LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:53.025 LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:53.025 LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:53.025 LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:53.025 LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:53.025 
00:07:53.025 NVM Specific Namespace Data
00:07:53.025 ===========================
00:07:53.025 Logical Block Storage Tag Mask:               0
00:07:53.025 Protection Information Capabilities:
00:07:53.025   16b Guard Protection Information Storage Tag Support:  No
00:07:53.025   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:53.025   Storage Tag Check Read Support:                        No
00:07:53.025 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:53.025 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:53.025 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:07:53.025 =====================================================
00:07:53.025 NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:53.025 =====================================================
00:07:53.025 Controller Capabilities/Features
00:07:53.025 ================================
00:07:53.025 Vendor ID: 1b36
00:07:53.025 Subsystem Vendor ID: 1af4
00:07:53.025 Serial Number: 12341
00:07:53.025 Model Number: QEMU NVMe Ctrl
00:07:53.025 Firmware Version: 8.0.0
00:07:53.025 Recommended Arb Burst: 6
00:07:53.025 IEEE OUI Identifier: 00 54 52
00:07:53.025 Multi-path I/O
00:07:53.025   May have multiple subsystem ports: No
00:07:53.025   May have multiple controllers: No
00:07:53.025   Associated with SR-IOV VF: No
00:07:53.025 Max Data Transfer Size: 524288
00:07:53.025 Max Number of Namespaces: 256
00:07:53.025 Max Number of I/O Queues: 64
00:07:53.025 NVMe Specification Version (VS): 1.4
00:07:53.025 NVMe Specification Version (Identify): 1.4
00:07:53.025 Maximum Queue Entries: 2048
00:07:53.025 Contiguous Queues Required: Yes
00:07:53.025 Arbitration Mechanisms Supported
00:07:53.025 Weighted Round Robin: Not Supported
00:07:53.025 Vendor Specific: Not Supported
00:07:53.025 Reset Timeout: 7500 ms
00:07:53.025 Doorbell Stride: 4 bytes
00:07:53.025 NVM Subsystem Reset: Not Supported
00:07:53.025 Command Sets Supported
00:07:53.025 NVM Command Set: Supported
00:07:53.025 Boot Partition: Not Supported
00:07:53.025 Memory Page Size Minimum: 4096 bytes
00:07:53.025 Memory Page Size Maximum: 65536 bytes
00:07:53.025 Persistent Memory Region: Not Supported
00:07:53.025 Optional Asynchronous Events Supported
00:07:53.025 Namespace Attribute Notices: Supported
00:07:53.025 Firmware Activation Notices: Not Supported
00:07:53.025 ANA Change Notices: Not Supported
00:07:53.025 PLE Aggregate Log Change Notices: Not Supported
00:07:53.025 LBA Status Info Alert Notices: Not Supported
00:07:53.025 EGE Aggregate Log Change Notices: Not Supported
00:07:53.025 Normal NVM Subsystem Shutdown event: Not Supported
00:07:53.025 Zone Descriptor Change Notices: Not Supported
00:07:53.025 Discovery Log Change Notices: Not Supported
00:07:53.025 Controller Attributes
00:07:53.025 128-bit Host Identifier: Not Supported
00:07:53.025 Non-Operational Permissive Mode: Not Supported
00:07:53.025 NVM Sets: Not Supported
00:07:53.025 Read Recovery Levels: Not Supported
00:07:53.025 Endurance Groups: Not Supported
00:07:53.025 Predictable Latency Mode: Not Supported
00:07:53.025 Traffic Based Keep ALive: Not Supported
00:07:53.025 Namespace Granularity: Not Supported
00:07:53.025 SQ Associations: Not Supported
00:07:53.025 UUID List: Not Supported
00:07:53.025 Multi-Domain Subsystem: Not Supported
00:07:53.025 Fixed Capacity Management: Not Supported
00:07:53.025 Variable Capacity Management: Not Supported
00:07:53.025 Delete Endurance Group: Not Supported
00:07:53.025 Delete NVM Set: Not Supported
00:07:53.025 Extended LBA Formats Supported: Supported
00:07:53.025 Flexible Data Placement Supported: Not Supported
00:07:53.025 
00:07:53.025 Controller Memory Buffer Support
00:07:53.025 ================================
00:07:53.025 Supported: No
00:07:53.025 
00:07:53.025 Persistent Memory Region Support
00:07:53.025 ================================
00:07:53.025 Supported: No
00:07:53.025 
00:07:53.025 Admin Command Set Attributes
00:07:53.025 ============================
00:07:53.025 Security Send/Receive: Not Supported
00:07:53.025 Format NVM: Supported
00:07:53.025 Firmware Activate/Download: Not Supported
00:07:53.025 Namespace Management: Supported
00:07:53.025 Device Self-Test: Not Supported
00:07:53.025 Directives: Supported
00:07:53.025 NVMe-MI: Not Supported
00:07:53.025 Virtualization Management: Not Supported
00:07:53.025 Doorbell Buffer Config: Supported
00:07:53.025 Get LBA Status Capability: Not Supported
00:07:53.025 Command & Feature Lockdown Capability: Not Supported
00:07:53.025 Abort Command Limit: 4
00:07:53.025 Async Event Request Limit: 4
00:07:53.025 Number of Firmware Slots: N/A
00:07:53.025 Firmware Slot 1 Read-Only: N/A
00:07:53.025 Firmware Activation Without Reset: N/A
00:07:53.025 Multiple Update Detection Support: N/A
00:07:53.025 Firmware Update Granularity: No Information Provided
00:07:53.025 Per-Namespace SMART Log: Yes
00:07:53.025 Asymmetric Namespace Access Log Page: Not Supported
00:07:53.025 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:07:53.025 Command Effects Log Page: Supported
00:07:53.025 Get Log Page Extended Data: Supported
00:07:53.025 Telemetry Log Pages: Not Supported
00:07:53.025 Persistent Event Log Pages: Not Supported
00:07:53.025 Supported Log Pages Log Page: May Support
00:07:53.025 Commands Supported & Effects Log Page: Not Supported
00:07:53.025 Feature Identifiers & Effects Log Page:May Support
00:07:53.025 NVMe-MI Commands & Effects Log Page: May Support
00:07:53.025 Data Area 4 for Telemetry Log: Not Supported
00:07:53.025 Error Log Page Entries Supported: 1
00:07:53.025 Keep Alive: Not Supported
00:07:53.025 
00:07:53.025 NVM Command Set Attributes
00:07:53.025 ==========================
00:07:53.025 Submission Queue Entry Size
00:07:53.025 Max: 64
00:07:53.025 Min: 64
00:07:53.025 Completion Queue Entry Size
00:07:53.025 Max: 16
00:07:53.025 Min: 16
00:07:53.025 Number of Namespaces: 256
00:07:53.025 Compare Command: Supported
00:07:53.025 Write Uncorrectable Command: Not Supported
00:07:53.025 Dataset Management Command: Supported
00:07:53.025 Write Zeroes Command: Supported
00:07:53.025 Set Features Save Field: Supported
00:07:53.025 Reservations: Not Supported
00:07:53.025 Timestamp: Supported
00:07:53.025 Copy: Supported
00:07:53.025 Volatile Write Cache: Present
00:07:53.025 Atomic Write Unit (Normal): 1
00:07:53.025 Atomic Write Unit (PFail): 1
00:07:53.025 Atomic Compare & Write Unit: 1
00:07:53.025 Fused Compare & Write: Not Supported
00:07:53.025 Scatter-Gather List
00:07:53.025 SGL Command Set: Supported
00:07:53.025 SGL Keyed: Not Supported
00:07:53.025 SGL Bit Bucket Descriptor: Not Supported
00:07:53.025 SGL Metadata Pointer: Not Supported
00:07:53.025 Oversized SGL: Not Supported
00:07:53.025 SGL Metadata Address: Not Supported
00:07:53.025 SGL Offset: Not Supported
00:07:53.025 Transport SGL Data Block: Not Supported
00:07:53.025 Replay Protected Memory Block: Not Supported
00:07:53.025 
00:07:53.025 Firmware Slot Information
00:07:53.025 =========================
00:07:53.025 Active slot: 1
00:07:53.025 Slot 1 Firmware Revision: 1.0
00:07:53.025 
00:07:53.025 
00:07:53.025 Commands Supported and Effects
00:07:53.025 ==============================
00:07:53.025 Admin Commands
00:07:53.025 --------------
00:07:53.025 Delete I/O Submission Queue (00h): Supported
00:07:53.025 Create I/O Submission Queue (01h): Supported
00:07:53.025 Get Log Page (02h): Supported
00:07:53.025 Delete I/O Completion Queue (04h): Supported
00:07:53.025 Create I/O Completion Queue (05h): Supported
00:07:53.025 Identify (06h): Supported
00:07:53.025 Abort (08h): Supported
00:07:53.025 Set Features (09h): Supported
00:07:53.025 Get Features (0Ah): Supported
00:07:53.025 Asynchronous Event Request (0Ch): Supported
00:07:53.025 Namespace Attachment (15h): Supported NS-Inventory-Change
00:07:53.025 Directive Send (19h): Supported
00:07:53.025 Directive Receive (1Ah): Supported
00:07:53.025 Virtualization Management (1Ch): Supported
00:07:53.025 Doorbell Buffer Config (7Ch): Supported
00:07:53.025 Format NVM (80h): Supported LBA-Change
00:07:53.025 I/O Commands
00:07:53.025 ------------
00:07:53.025 Flush (00h): Supported LBA-Change
00:07:53.025 Write (01h): Supported LBA-Change
00:07:53.025 Read (02h): Supported
00:07:53.025 Compare (05h): Supported
00:07:53.025 Write Zeroes (08h): Supported LBA-Change
00:07:53.025 Dataset Management (09h): Supported LBA-Change
00:07:53.025 Unknown (0Ch): Supported
00:07:53.025 Unknown (12h): Supported
00:07:53.025 Copy (19h): Supported LBA-Change
00:07:53.025 Unknown (1Dh): Supported LBA-Change
00:07:53.025 
00:07:53.025 Error Log
00:07:53.025 =========
00:07:53.025 
00:07:53.025 Arbitration
00:07:53.025 ===========
00:07:53.025 Arbitration Burst: no limit
00:07:53.025 
00:07:53.025 Power Management
00:07:53.025 ================
00:07:53.025 Number of Power States: 1
00:07:53.025 Current Power State: Power State #0
00:07:53.025 Power State #0:
00:07:53.025 Max Power: 25.00 W
00:07:53.026 Non-Operational State: Operational
00:07:53.026 Entry Latency: 16 microseconds
00:07:53.026 Exit Latency: 4 microseconds
00:07:53.026 Relative Read Throughput: 0
00:07:53.026 Relative Read Latency: 0
00:07:53.026 Relative Write Throughput: 0
00:07:53.026 Relative Write Latency: 0
00:07:53.285 Idle Power: Not Reported
00:07:53.285 Active Power: Not Reported
00:07:53.285 Non-Operational Permissive Mode: Not Supported
00:07:53.285 
00:07:53.285 Health Information
00:07:53.285 ==================
00:07:53.285 Critical Warnings:
00:07:53.285 Available Spare Space: OK
00:07:53.285 Temperature: OK
00:07:53.285 Device Reliability: OK
00:07:53.285 Read Only: No
00:07:53.285 Volatile Memory Backup: OK
00:07:53.285 Current Temperature: 323 Kelvin (50 Celsius)
00:07:53.285 Temperature Threshold: 343 Kelvin (70 Celsius)
00:07:53.285 Available Spare: 0%
00:07:53.285 Available Spare Threshold: 0%
00:07:53.285 Life Percentage Used: 0%
00:07:53.285 Data Units Read: 1050
00:07:53.285 Data Units Written: 922
00:07:53.285 Host Read Commands: 56039
00:07:53.285 Host Write Commands: 54912
00:07:53.285 Controller Busy Time: 0 minutes
00:07:53.285 Power Cycles: 0
00:07:53.285 Power On Hours: 0 hours
00:07:53.285 Unsafe Shutdowns: 0
00:07:53.285 Unrecoverable Media Errors: 0
00:07:53.285 Lifetime Error Log Entries: 0
00:07:53.285 Warning Temperature Time: 0 minutes
00:07:53.285 Critical Temperature Time: 0 minutes
00:07:53.285 
00:07:53.285 Number of Queues
00:07:53.285 ================
00:07:53.285 Number of I/O Submission Queues: 64
00:07:53.285 Number of I/O Completion Queues: 64
00:07:53.285 
00:07:53.285 ZNS Specific Controller Data
00:07:53.285 ============================
00:07:53.285 Zone Append Size Limit: 0
00:07:53.285 
00:07:53.285 
00:07:53.285 Active Namespaces
00:07:53.285 =================
00:07:53.285 Namespace ID:1
00:07:53.285 Error Recovery Timeout: Unlimited
00:07:53.285 Command Set Identifier: NVM (00h)
00:07:53.285 Deallocate: Supported
00:07:53.285 Deallocated/Unwritten Error: Supported
00:07:53.285 Deallocated Read Value: All 0x00
00:07:53.285 Deallocate in Write Zeroes: Not Supported
00:07:53.285 Deallocated Guard Field: 0xFFFF
00:07:53.285 Flush: Supported
00:07:53.285 Reservation: Not Supported
00:07:53.285 Namespace Sharing Capabilities: Private
00:07:53.285 Size (in LBAs): 1310720 (5GiB)
00:07:53.285 Capacity (in LBAs): 1310720 (5GiB)
00:07:53.285 Utilization (in LBAs): 1310720 (5GiB)
00:07:53.285 Thin Provisioning: Not Supported
00:07:53.285 Per-NS Atomic Units: No 00:07:53.285 Maximum Single Source Range Length: 128 00:07:53.285 Maximum Copy Length: 128 00:07:53.285 Maximum Source Range Count: 128 00:07:53.285 NGUID/EUI64 Never Reused: No 00:07:53.285 Namespace Write Protected: No 00:07:53.285 Number of LBA Formats: 8 00:07:53.285 Current LBA Format: LBA Format #04 00:07:53.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.285 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.285 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.285 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.285 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.285 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.285 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.285 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.285 00:07:53.285 NVM Specific Namespace Data 00:07:53.285 =========================== 00:07:53.285 Logical Block Storage Tag Mask: 0 00:07:53.285 Protection Information Capabilities: 00:07:53.285 16b Guard Protection Information Storage Tag Support: No 00:07:53.285 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.285 Storage Tag Check Read Support: No 00:07:53.285 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.285 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:53.285 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:53.285 ===================================================== 00:07:53.285 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:53.285 ===================================================== 00:07:53.285 Controller Capabilities/Features 00:07:53.285 ================================ 00:07:53.285 Vendor ID: 1b36 00:07:53.285 Subsystem Vendor ID: 1af4 00:07:53.285 Serial Number: 12342 00:07:53.285 Model Number: QEMU NVMe Ctrl 00:07:53.285 Firmware Version: 8.0.0 00:07:53.285 Recommended Arb Burst: 6 00:07:53.285 IEEE OUI Identifier: 00 54 52 00:07:53.285 Multi-path I/O 00:07:53.285 May have multiple subsystem ports: No 00:07:53.285 May have multiple controllers: No 00:07:53.285 Associated with SR-IOV VF: No 00:07:53.285 Max Data Transfer Size: 524288 00:07:53.285 Max Number of Namespaces: 256 00:07:53.285 Max Number of I/O Queues: 64 00:07:53.285 NVMe Specification Version (VS): 1.4 00:07:53.285 NVMe Specification Version (Identify): 1.4 00:07:53.285 Maximum Queue Entries: 2048 00:07:53.285 Contiguous Queues Required: Yes 00:07:53.285 Arbitration Mechanisms Supported 00:07:53.285 Weighted Round Robin: Not Supported 00:07:53.285 Vendor Specific: Not Supported 00:07:53.285 Reset Timeout: 7500 ms 00:07:53.285 Doorbell Stride: 4 bytes 00:07:53.285 NVM Subsystem Reset: Not Supported 00:07:53.285 Command Sets Supported 00:07:53.285 NVM Command Set: Supported 00:07:53.285 Boot Partition: Not Supported 00:07:53.285 Memory Page Size Minimum: 4096 bytes 00:07:53.285 Memory Page Size Maximum: 65536 bytes 00:07:53.285 Persistent Memory Region: Not Supported 00:07:53.285 Optional Asynchronous Events Supported 00:07:53.285 Namespace Attribute Notices: 
Supported 00:07:53.285 Firmware Activation Notices: Not Supported 00:07:53.285 ANA Change Notices: Not Supported 00:07:53.285 PLE Aggregate Log Change Notices: Not Supported 00:07:53.285 LBA Status Info Alert Notices: Not Supported 00:07:53.285 EGE Aggregate Log Change Notices: Not Supported 00:07:53.285 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.285 Zone Descriptor Change Notices: Not Supported 00:07:53.285 Discovery Log Change Notices: Not Supported 00:07:53.285 Controller Attributes 00:07:53.285 128-bit Host Identifier: Not Supported 00:07:53.285 Non-Operational Permissive Mode: Not Supported 00:07:53.285 NVM Sets: Not Supported 00:07:53.285 Read Recovery Levels: Not Supported 00:07:53.285 Endurance Groups: Not Supported 00:07:53.285 Predictable Latency Mode: Not Supported 00:07:53.285 Traffic Based Keep Alive: Not Supported 00:07:53.285 Namespace Granularity: Not Supported 00:07:53.285 SQ Associations: Not Supported 00:07:53.285 UUID List: Not Supported 00:07:53.285 Multi-Domain Subsystem: Not Supported 00:07:53.285 Fixed Capacity Management: Not Supported 00:07:53.285 Variable Capacity Management: Not Supported 00:07:53.285 Delete Endurance Group: Not Supported 00:07:53.285 Delete NVM Set: Not Supported 00:07:53.285 Extended LBA Formats Supported: Supported 00:07:53.285 Flexible Data Placement Supported: Not Supported 00:07:53.285 00:07:53.285 Controller Memory Buffer Support 00:07:53.285 ================================ 00:07:53.285 Supported: No 00:07:53.285 00:07:53.285 Persistent Memory Region Support 00:07:53.285 ================================ 00:07:53.285 Supported: No 00:07:53.285 00:07:53.285 Admin Command Set Attributes 00:07:53.285 ============================ 00:07:53.285 Security Send/Receive: Not Supported 00:07:53.285 Format NVM: Supported 00:07:53.285 Firmware Activate/Download: Not Supported 00:07:53.285 Namespace Management: Supported 00:07:53.285 Device Self-Test: Not Supported 00:07:53.285 Directives: Supported 
00:07:53.285 NVMe-MI: Not Supported 00:07:53.285 Virtualization Management: Not Supported 00:07:53.285 Doorbell Buffer Config: Supported 00:07:53.285 Get LBA Status Capability: Not Supported 00:07:53.285 Command & Feature Lockdown Capability: Not Supported 00:07:53.285 Abort Command Limit: 4 00:07:53.285 Async Event Request Limit: 4 00:07:53.285 Number of Firmware Slots: N/A 00:07:53.285 Firmware Slot 1 Read-Only: N/A 00:07:53.285 Firmware Activation Without Reset: N/A 00:07:53.285 Multiple Update Detection Support: N/A 00:07:53.285 Firmware Update Granularity: No Information Provided 00:07:53.285 Per-Namespace SMART Log: Yes 00:07:53.285 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.286 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:53.286 Command Effects Log Page: Supported 00:07:53.286 Get Log Page Extended Data: Supported 00:07:53.286 Telemetry Log Pages: Not Supported 00:07:53.286 Persistent Event Log Pages: Not Supported 00:07:53.286 Supported Log Pages Log Page: May Support 00:07:53.286 Commands Supported & Effects Log Page: Not Supported 00:07:53.286 Feature Identifiers & Effects Log Page: May Support 00:07:53.286 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.286 Data Area 4 for Telemetry Log: Not Supported 00:07:53.286 Error Log Page Entries Supported: 1 00:07:53.286 Keep Alive: Not Supported 00:07:53.286 00:07:53.286 NVM Command Set Attributes 00:07:53.286 ========================== 00:07:53.286 Submission Queue Entry Size 00:07:53.286 Max: 64 00:07:53.286 Min: 64 00:07:53.286 Completion Queue Entry Size 00:07:53.286 Max: 16 00:07:53.286 Min: 16 00:07:53.286 Number of Namespaces: 256 00:07:53.286 Compare Command: Supported 00:07:53.286 Write Uncorrectable Command: Not Supported 00:07:53.286 Dataset Management Command: Supported 00:07:53.286 Write Zeroes Command: Supported 00:07:53.286 Set Features Save Field: Supported 00:07:53.286 Reservations: Not Supported 00:07:53.286 Timestamp: Supported 00:07:53.286 Copy: Supported 
00:07:53.286 Volatile Write Cache: Present 00:07:53.286 Atomic Write Unit (Normal): 1 00:07:53.286 Atomic Write Unit (PFail): 1 00:07:53.286 Atomic Compare & Write Unit: 1 00:07:53.286 Fused Compare & Write: Not Supported 00:07:53.286 Scatter-Gather List 00:07:53.286 SGL Command Set: Supported 00:07:53.286 SGL Keyed: Not Supported 00:07:53.286 SGL Bit Bucket Descriptor: Not Supported 00:07:53.286 SGL Metadata Pointer: Not Supported 00:07:53.286 Oversized SGL: Not Supported 00:07:53.286 SGL Metadata Address: Not Supported 00:07:53.286 SGL Offset: Not Supported 00:07:53.286 Transport SGL Data Block: Not Supported 00:07:53.286 Replay Protected Memory Block: Not Supported 00:07:53.286 00:07:53.286 Firmware Slot Information 00:07:53.286 ========================= 00:07:53.286 Active slot: 1 00:07:53.286 Slot 1 Firmware Revision: 1.0 00:07:53.286 00:07:53.286 00:07:53.286 Commands Supported and Effects 00:07:53.286 ============================== 00:07:53.286 Admin Commands 00:07:53.286 -------------- 00:07:53.286 Delete I/O Submission Queue (00h): Supported 00:07:53.286 Create I/O Submission Queue (01h): Supported 00:07:53.286 Get Log Page (02h): Supported 00:07:53.286 Delete I/O Completion Queue (04h): Supported 00:07:53.286 Create I/O Completion Queue (05h): Supported 00:07:53.286 Identify (06h): Supported 00:07:53.286 Abort (08h): Supported 00:07:53.286 Set Features (09h): Supported 00:07:53.286 Get Features (0Ah): Supported 00:07:53.286 Asynchronous Event Request (0Ch): Supported 00:07:53.286 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.286 Directive Send (19h): Supported 00:07:53.286 Directive Receive (1Ah): Supported 00:07:53.286 Virtualization Management (1Ch): Supported 00:07:53.286 Doorbell Buffer Config (7Ch): Supported 00:07:53.286 Format NVM (80h): Supported LBA-Change 00:07:53.286 I/O Commands 00:07:53.286 ------------ 00:07:53.286 Flush (00h): Supported LBA-Change 00:07:53.286 Write (01h): Supported LBA-Change 00:07:53.286 Read (02h): 
Supported 00:07:53.286 Compare (05h): Supported 00:07:53.286 Write Zeroes (08h): Supported LBA-Change 00:07:53.286 Dataset Management (09h): Supported LBA-Change 00:07:53.286 Unknown (0Ch): Supported 00:07:53.286 Unknown (12h): Supported 00:07:53.286 Copy (19h): Supported LBA-Change 00:07:53.286 Unknown (1Dh): Supported LBA-Change 00:07:53.286 00:07:53.286 Error Log 00:07:53.286 ========= 00:07:53.286 00:07:53.286 Arbitration 00:07:53.286 =========== 00:07:53.286 Arbitration Burst: no limit 00:07:53.286 00:07:53.286 Power Management 00:07:53.286 ================ 00:07:53.286 Number of Power States: 1 00:07:53.286 Current Power State: Power State #0 00:07:53.286 Power State #0: 00:07:53.286 Max Power: 25.00 W 00:07:53.286 Non-Operational State: Operational 00:07:53.286 Entry Latency: 16 microseconds 00:07:53.286 Exit Latency: 4 microseconds 00:07:53.286 Relative Read Throughput: 0 00:07:53.286 Relative Read Latency: 0 00:07:53.286 Relative Write Throughput: 0 00:07:53.286 Relative Write Latency: 0 00:07:53.286 Idle Power: Not Reported 00:07:53.286 Active Power: Not Reported 00:07:53.286 Non-Operational Permissive Mode: Not Supported 00:07:53.286 00:07:53.286 Health Information 00:07:53.286 ================== 00:07:53.286 Critical Warnings: 00:07:53.286 Available Spare Space: OK 00:07:53.286 Temperature: OK 00:07:53.286 Device Reliability: OK 00:07:53.286 Read Only: No 00:07:53.286 Volatile Memory Backup: OK 00:07:53.286 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.286 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.286 Available Spare: 0% 00:07:53.286 Available Spare Threshold: 0% 00:07:53.286 Life Percentage Used: 0% 00:07:53.286 Data Units Read: 2260 00:07:53.286 Data Units Written: 2048 00:07:53.286 Host Read Commands: 119480 00:07:53.286 Host Write Commands: 117749 00:07:53.286 Controller Busy Time: 0 minutes 00:07:53.286 Power Cycles: 0 00:07:53.286 Power On Hours: 0 hours 00:07:53.286 Unsafe Shutdowns: 0 00:07:53.286 Unrecoverable Media 
Errors: 0 00:07:53.286 Lifetime Error Log Entries: 0 00:07:53.286 Warning Temperature Time: 0 minutes 00:07:53.286 Critical Temperature Time: 0 minutes 00:07:53.286 00:07:53.286 Number of Queues 00:07:53.286 ================ 00:07:53.286 Number of I/O Submission Queues: 64 00:07:53.286 Number of I/O Completion Queues: 64 00:07:53.286 00:07:53.286 ZNS Specific Controller Data 00:07:53.286 ============================ 00:07:53.286 Zone Append Size Limit: 0 00:07:53.286 00:07:53.286 00:07:53.286 Active Namespaces 00:07:53.286 ================= 00:07:53.286 Namespace ID:1 00:07:53.286 Error Recovery Timeout: Unlimited 00:07:53.286 Command Set Identifier: NVM (00h) 00:07:53.286 Deallocate: Supported 00:07:53.286 Deallocated/Unwritten Error: Supported 00:07:53.286 Deallocated Read Value: All 0x00 00:07:53.286 Deallocate in Write Zeroes: Not Supported 00:07:53.286 Deallocated Guard Field: 0xFFFF 00:07:53.286 Flush: Supported 00:07:53.286 Reservation: Not Supported 00:07:53.286 Namespace Sharing Capabilities: Private 00:07:53.286 Size (in LBAs): 1048576 (4GiB) 00:07:53.286 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.286 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.286 Thin Provisioning: Not Supported 00:07:53.286 Per-NS Atomic Units: No 00:07:53.286 Maximum Single Source Range Length: 128 00:07:53.286 Maximum Copy Length: 128 00:07:53.286 Maximum Source Range Count: 128 00:07:53.286 NGUID/EUI64 Never Reused: No 00:07:53.286 Namespace Write Protected: No 00:07:53.286 Number of LBA Formats: 8 00:07:53.286 Current LBA Format: LBA Format #04 00:07:53.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 
00:07:53.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.286 00:07:53.286 NVM Specific Namespace Data 00:07:53.286 =========================== 00:07:53.286 Logical Block Storage Tag Mask: 0 00:07:53.286 Protection Information Capabilities: 00:07:53.286 16b Guard Protection Information Storage Tag Support: No 00:07:53.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.286 Storage Tag Check Read Support: No 00:07:53.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.286 Namespace ID:2 00:07:53.286 Error Recovery Timeout: Unlimited 00:07:53.286 Command Set Identifier: NVM (00h) 00:07:53.286 Deallocate: Supported 00:07:53.286 Deallocated/Unwritten Error: Supported 00:07:53.286 Deallocated Read Value: All 0x00 00:07:53.286 Deallocate in Write Zeroes: Not Supported 00:07:53.286 Deallocated Guard Field: 0xFFFF 00:07:53.286 Flush: Supported 00:07:53.286 Reservation: Not Supported 00:07:53.286 Namespace Sharing Capabilities: Private 00:07:53.286 Size (in LBAs): 1048576 (4GiB) 00:07:53.286 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.286 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.286 Thin Provisioning: Not Supported 00:07:53.286 Per-NS Atomic Units: No 
00:07:53.286 Maximum Single Source Range Length: 128 00:07:53.287 Maximum Copy Length: 128 00:07:53.287 Maximum Source Range Count: 128 00:07:53.287 NGUID/EUI64 Never Reused: No 00:07:53.287 Namespace Write Protected: No 00:07:53.287 Number of LBA Formats: 8 00:07:53.287 Current LBA Format: LBA Format #04 00:07:53.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.287 00:07:53.287 NVM Specific Namespace Data 00:07:53.287 =========================== 00:07:53.287 Logical Block Storage Tag Mask: 0 00:07:53.287 Protection Information Capabilities: 00:07:53.287 16b Guard Protection Information Storage Tag Support: No 00:07:53.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.287 Storage Tag Check Read Support: No 00:07:53.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:53.287 Namespace ID:3 00:07:53.287 Error Recovery Timeout: Unlimited 00:07:53.287 Command Set Identifier: NVM (00h) 00:07:53.287 Deallocate: Supported 00:07:53.287 Deallocated/Unwritten Error: Supported 00:07:53.287 Deallocated Read Value: All 0x00 00:07:53.287 Deallocate in Write Zeroes: Not Supported 00:07:53.287 Deallocated Guard Field: 0xFFFF 00:07:53.287 Flush: Supported 00:07:53.287 Reservation: Not Supported 00:07:53.287 Namespace Sharing Capabilities: Private 00:07:53.287 Size (in LBAs): 1048576 (4GiB) 00:07:53.287 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.287 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.287 Thin Provisioning: Not Supported 00:07:53.287 Per-NS Atomic Units: No 00:07:53.287 Maximum Single Source Range Length: 128 00:07:53.287 Maximum Copy Length: 128 00:07:53.287 Maximum Source Range Count: 128 00:07:53.287 NGUID/EUI64 Never Reused: No 00:07:53.287 Namespace Write Protected: No 00:07:53.287 Number of LBA Formats: 8 00:07:53.287 Current LBA Format: LBA Format #04 00:07:53.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.287 00:07:53.287 NVM Specific Namespace Data 00:07:53.287 =========================== 00:07:53.287 Logical Block Storage Tag Mask: 0 00:07:53.287 Protection Information Capabilities: 00:07:53.287 16b Guard Protection Information Storage Tag Support: No 00:07:53.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.287 Storage Tag Check Read Support: No 00:07:53.287 Extended LBA Format #00: Storage 
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.287 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:53.287 01:07:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:53.546 ===================================================== 00:07:53.546 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.546 ===================================================== 00:07:53.546 Controller Capabilities/Features 00:07:53.546 ================================ 00:07:53.546 Vendor ID: 1b36 00:07:53.546 Subsystem Vendor ID: 1af4 00:07:53.546 Serial Number: 12343 00:07:53.546 Model Number: QEMU NVMe Ctrl 00:07:53.546 Firmware Version: 8.0.0 00:07:53.546 Recommended Arb Burst: 6 00:07:53.546 IEEE OUI Identifier: 00 54 52 00:07:53.546 Multi-path I/O 00:07:53.546 May have multiple subsystem ports: No 00:07:53.546 May have multiple controllers: Yes 00:07:53.546 Associated with SR-IOV VF: No 00:07:53.546 Max Data Transfer Size: 524288 00:07:53.546 Max Number of Namespaces: 256 00:07:53.546 Max Number of I/O Queues: 64 00:07:53.546 NVMe Specification Version (VS): 1.4 00:07:53.546 NVMe Specification Version (Identify): 1.4 
00:07:53.546 Maximum Queue Entries: 2048 00:07:53.546 Contiguous Queues Required: Yes 00:07:53.546 Arbitration Mechanisms Supported 00:07:53.546 Weighted Round Robin: Not Supported 00:07:53.546 Vendor Specific: Not Supported 00:07:53.546 Reset Timeout: 7500 ms 00:07:53.546 Doorbell Stride: 4 bytes 00:07:53.546 NVM Subsystem Reset: Not Supported 00:07:53.546 Command Sets Supported 00:07:53.546 NVM Command Set: Supported 00:07:53.546 Boot Partition: Not Supported 00:07:53.546 Memory Page Size Minimum: 4096 bytes 00:07:53.546 Memory Page Size Maximum: 65536 bytes 00:07:53.546 Persistent Memory Region: Not Supported 00:07:53.546 Optional Asynchronous Events Supported 00:07:53.546 Namespace Attribute Notices: Supported 00:07:53.546 Firmware Activation Notices: Not Supported 00:07:53.546 ANA Change Notices: Not Supported 00:07:53.546 PLE Aggregate Log Change Notices: Not Supported 00:07:53.546 LBA Status Info Alert Notices: Not Supported 00:07:53.546 EGE Aggregate Log Change Notices: Not Supported 00:07:53.546 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.546 Zone Descriptor Change Notices: Not Supported 00:07:53.546 Discovery Log Change Notices: Not Supported 00:07:53.546 Controller Attributes 00:07:53.546 128-bit Host Identifier: Not Supported 00:07:53.546 Non-Operational Permissive Mode: Not Supported 00:07:53.546 NVM Sets: Not Supported 00:07:53.546 Read Recovery Levels: Not Supported 00:07:53.546 Endurance Groups: Supported 00:07:53.546 Predictable Latency Mode: Not Supported 00:07:53.546 Traffic Based Keep Alive: Not Supported 00:07:53.546 Namespace Granularity: Not Supported 00:07:53.546 SQ Associations: Not Supported 00:07:53.546 UUID List: Not Supported 00:07:53.546 Multi-Domain Subsystem: Not Supported 00:07:53.546 Fixed Capacity Management: Not Supported 00:07:53.546 Variable Capacity Management: Not Supported 00:07:53.546 Delete Endurance Group: Not Supported 00:07:53.546 Delete NVM Set: Not Supported 00:07:53.546 Extended LBA Formats Supported: 
Supported 00:07:53.546 Flexible Data Placement Supported: Supported 00:07:53.546 00:07:53.546 Controller Memory Buffer Support 00:07:53.546 ================================ 00:07:53.546 Supported: No 00:07:53.546 00:07:53.546 Persistent Memory Region Support 00:07:53.546 ================================ 00:07:53.546 Supported: No 00:07:53.546 00:07:53.546 Admin Command Set Attributes 00:07:53.546 ============================ 00:07:53.546 Security Send/Receive: Not Supported 00:07:53.546 Format NVM: Supported 00:07:53.546 Firmware Activate/Download: Not Supported 00:07:53.546 Namespace Management: Supported 00:07:53.546 Device Self-Test: Not Supported 00:07:53.546 Directives: Supported 00:07:53.546 NVMe-MI: Not Supported 00:07:53.546 Virtualization Management: Not Supported 00:07:53.546 Doorbell Buffer Config: Supported 00:07:53.546 Get LBA Status Capability: Not Supported 00:07:53.546 Command & Feature Lockdown Capability: Not Supported 00:07:53.546 Abort Command Limit: 4 00:07:53.546 Async Event Request Limit: 4 00:07:53.546 Number of Firmware Slots: N/A 00:07:53.546 Firmware Slot 1 Read-Only: N/A 00:07:53.546 Firmware Activation Without Reset: N/A 00:07:53.546 Multiple Update Detection Support: N/A 00:07:53.547 Firmware Update Granularity: No Information Provided 00:07:53.547 Per-Namespace SMART Log: Yes 00:07:53.547 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.547 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:53.547 Command Effects Log Page: Supported 00:07:53.547 Get Log Page Extended Data: Supported 00:07:53.547 Telemetry Log Pages: Not Supported 00:07:53.547 Persistent Event Log Pages: Not Supported 00:07:53.547 Supported Log Pages Log Page: May Support 00:07:53.547 Commands Supported & Effects Log Page: Not Supported 00:07:53.547 Feature Identifiers & Effects Log Page: May Support 00:07:53.547 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.547 Data Area 4 for Telemetry Log: Not Supported 00:07:53.547 Error Log Page Entries 
Supported: 1 00:07:53.547 Keep Alive: Not Supported 00:07:53.547 00:07:53.547 NVM Command Set Attributes 00:07:53.547 ========================== 00:07:53.547 Submission Queue Entry Size 00:07:53.547 Max: 64 00:07:53.547 Min: 64 00:07:53.547 Completion Queue Entry Size 00:07:53.547 Max: 16 00:07:53.547 Min: 16 00:07:53.547 Number of Namespaces: 256 00:07:53.547 Compare Command: Supported 00:07:53.547 Write Uncorrectable Command: Not Supported 00:07:53.547 Dataset Management Command: Supported 00:07:53.547 Write Zeroes Command: Supported 00:07:53.547 Set Features Save Field: Supported 00:07:53.547 Reservations: Not Supported 00:07:53.547 Timestamp: Supported 00:07:53.547 Copy: Supported 00:07:53.547 Volatile Write Cache: Present 00:07:53.547 Atomic Write Unit (Normal): 1 00:07:53.547 Atomic Write Unit (PFail): 1 00:07:53.547 Atomic Compare & Write Unit: 1 00:07:53.547 Fused Compare & Write: Not Supported 00:07:53.547 Scatter-Gather List 00:07:53.547 SGL Command Set: Supported 00:07:53.547 SGL Keyed: Not Supported 00:07:53.547 SGL Bit Bucket Descriptor: Not Supported 00:07:53.547 SGL Metadata Pointer: Not Supported 00:07:53.547 Oversized SGL: Not Supported 00:07:53.547 SGL Metadata Address: Not Supported 00:07:53.547 SGL Offset: Not Supported 00:07:53.547 Transport SGL Data Block: Not Supported 00:07:53.547 Replay Protected Memory Block: Not Supported 00:07:53.547 00:07:53.547 Firmware Slot Information 00:07:53.547 ========================= 00:07:53.547 Active slot: 1 00:07:53.547 Slot 1 Firmware Revision: 1.0 00:07:53.547 00:07:53.547 00:07:53.547 Commands Supported and Effects 00:07:53.547 ============================== 00:07:53.547 Admin Commands 00:07:53.547 -------------- 00:07:53.547 Delete I/O Submission Queue (00h): Supported 00:07:53.547 Create I/O Submission Queue (01h): Supported 00:07:53.547 Get Log Page (02h): Supported 00:07:53.547 Delete I/O Completion Queue (04h): Supported 00:07:53.547 Create I/O Completion Queue (05h): Supported 00:07:53.547 Identify 
(06h): Supported 00:07:53.547 Abort (08h): Supported 00:07:53.547 Set Features (09h): Supported 00:07:53.547 Get Features (0Ah): Supported 00:07:53.547 Asynchronous Event Request (0Ch): Supported 00:07:53.547 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.547 Directive Send (19h): Supported 00:07:53.547 Directive Receive (1Ah): Supported 00:07:53.547 Virtualization Management (1Ch): Supported 00:07:53.547 Doorbell Buffer Config (7Ch): Supported 00:07:53.547 Format NVM (80h): Supported LBA-Change 00:07:53.547 I/O Commands 00:07:53.547 ------------ 00:07:53.547 Flush (00h): Supported LBA-Change 00:07:53.547 Write (01h): Supported LBA-Change 00:07:53.547 Read (02h): Supported 00:07:53.547 Compare (05h): Supported 00:07:53.547 Write Zeroes (08h): Supported LBA-Change 00:07:53.547 Dataset Management (09h): Supported LBA-Change 00:07:53.547 Unknown (0Ch): Supported 00:07:53.547 Unknown (12h): Supported 00:07:53.547 Copy (19h): Supported LBA-Change 00:07:53.547 Unknown (1Dh): Supported LBA-Change 00:07:53.547 00:07:53.547 Error Log 00:07:53.547 ========= 00:07:53.547 00:07:53.547 Arbitration 00:07:53.547 =========== 00:07:53.547 Arbitration Burst: no limit 00:07:53.547 00:07:53.547 Power Management 00:07:53.547 ================ 00:07:53.547 Number of Power States: 1 00:07:53.547 Current Power State: Power State #0 00:07:53.547 Power State #0: 00:07:53.547 Max Power: 25.00 W 00:07:53.547 Non-Operational State: Operational 00:07:53.547 Entry Latency: 16 microseconds 00:07:53.547 Exit Latency: 4 microseconds 00:07:53.547 Relative Read Throughput: 0 00:07:53.547 Relative Read Latency: 0 00:07:53.547 Relative Write Throughput: 0 00:07:53.547 Relative Write Latency: 0 00:07:53.547 Idle Power: Not Reported 00:07:53.547 Active Power: Not Reported 00:07:53.547 Non-Operational Permissive Mode: Not Supported 00:07:53.547 00:07:53.547 Health Information 00:07:53.547 ================== 00:07:53.547 Critical Warnings: 00:07:53.547 Available Spare Space: OK 
00:07:53.547 Temperature: OK 00:07:53.547 Device Reliability: OK 00:07:53.547 Read Only: No 00:07:53.547 Volatile Memory Backup: OK 00:07:53.547 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.547 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.547 Available Spare: 0% 00:07:53.547 Available Spare Threshold: 0% 00:07:53.547 Life Percentage Used: 0% 00:07:53.547 Data Units Read: 854 00:07:53.547 Data Units Written: 783 00:07:53.547 Host Read Commands: 40770 00:07:53.547 Host Write Commands: 40194 00:07:53.547 Controller Busy Time: 0 minutes 00:07:53.547 Power Cycles: 0 00:07:53.547 Power On Hours: 0 hours 00:07:53.547 Unsafe Shutdowns: 0 00:07:53.547 Unrecoverable Media Errors: 0 00:07:53.547 Lifetime Error Log Entries: 0 00:07:53.547 Warning Temperature Time: 0 minutes 00:07:53.547 Critical Temperature Time: 0 minutes 00:07:53.547 00:07:53.547 Number of Queues 00:07:53.547 ================ 00:07:53.547 Number of I/O Submission Queues: 64 00:07:53.547 Number of I/O Completion Queues: 64 00:07:53.547 00:07:53.547 ZNS Specific Controller Data 00:07:53.547 ============================ 00:07:53.547 Zone Append Size Limit: 0 00:07:53.547 00:07:53.547 00:07:53.547 Active Namespaces 00:07:53.547 ================= 00:07:53.547 Namespace ID:1 00:07:53.547 Error Recovery Timeout: Unlimited 00:07:53.547 Command Set Identifier: NVM (00h) 00:07:53.547 Deallocate: Supported 00:07:53.547 Deallocated/Unwritten Error: Supported 00:07:53.547 Deallocated Read Value: All 0x00 00:07:53.547 Deallocate in Write Zeroes: Not Supported 00:07:53.547 Deallocated Guard Field: 0xFFFF 00:07:53.547 Flush: Supported 00:07:53.547 Reservation: Not Supported 00:07:53.547 Namespace Sharing Capabilities: Multiple Controllers 00:07:53.547 Size (in LBAs): 262144 (1GiB) 00:07:53.547 Capacity (in LBAs): 262144 (1GiB) 00:07:53.547 Utilization (in LBAs): 262144 (1GiB) 00:07:53.547 Thin Provisioning: Not Supported 00:07:53.547 Per-NS Atomic Units: No 00:07:53.547 Maximum Single Source Range 
Length: 128 00:07:53.547 Maximum Copy Length: 128 00:07:53.547 Maximum Source Range Count: 128 00:07:53.547 NGUID/EUI64 Never Reused: No 00:07:53.547 Namespace Write Protected: No 00:07:53.547 Endurance group ID: 1 00:07:53.547 Number of LBA Formats: 8 00:07:53.547 Current LBA Format: LBA Format #04 00:07:53.547 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.547 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.547 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.547 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.547 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.547 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.547 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.547 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.547 00:07:53.547 Get Feature FDP: 00:07:53.547 ================ 00:07:53.547 Enabled: Yes 00:07:53.547 FDP configuration index: 0 00:07:53.547 00:07:53.547 FDP configurations log page 00:07:53.547 =========================== 00:07:53.547 Number of FDP configurations: 1 00:07:53.547 Version: 0 00:07:53.547 Size: 112 00:07:53.547 FDP Configuration Descriptor: 0 00:07:53.547 Descriptor Size: 96 00:07:53.547 Reclaim Group Identifier format: 2 00:07:53.547 FDP Volatile Write Cache: Not Present 00:07:53.547 FDP Configuration: Valid 00:07:53.548 Vendor Specific Size: 0 00:07:53.548 Number of Reclaim Groups: 2 00:07:53.548 Number of Reclaim Unit Handles: 8 00:07:53.548 Max Placement Identifiers: 128 00:07:53.548 Number of Namespaces Supported: 256 00:07:53.548 Reclaim Unit Nominal Size: 6000000 bytes 00:07:53.548 Estimated Reclaim Unit Time Limit: Not Reported 00:07:53.548 RUH Desc #000: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #001: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #002: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #003: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #004: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #005: RUH Type: 
Initially Isolated 00:07:53.548 RUH Desc #006: RUH Type: Initially Isolated 00:07:53.548 RUH Desc #007: RUH Type: Initially Isolated 00:07:53.548 00:07:53.548 FDP reclaim unit handle usage log page 00:07:53.548 ====================================== 00:07:53.548 Number of Reclaim Unit Handles: 8 00:07:53.548 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:53.548 RUH Usage Desc #001: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #002: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #003: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #004: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #005: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #006: RUH Attributes: Unused 00:07:53.548 RUH Usage Desc #007: RUH Attributes: Unused 00:07:53.548 00:07:53.548 FDP statistics log page 00:07:53.548 ======================= 00:07:53.548 Host bytes with metadata written: 512139264 00:07:53.548 Media bytes with metadata written: 512196608 00:07:53.548 Media bytes erased: 0 00:07:53.548 00:07:53.548 FDP events log page 00:07:53.548 =================== 00:07:53.548 Number of FDP events: 0 00:07:53.548 00:07:53.548 NVM Specific Namespace Data 00:07:53.548 =========================== 00:07:53.548 Logical Block Storage Tag Mask: 0 00:07:53.548 Protection Information Capabilities: 00:07:53.548 16b Guard Protection Information Storage Tag Support: No 00:07:53.548 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.548 Storage Tag Check Read Support: No 00:07:53.548 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.548 ************************************ 00:07:53.548 END TEST nvme_identify 00:07:53.548 ************************************ 00:07:53.548 00:07:53.548 real 0m1.010s 00:07:53.548 user 0m0.360s 00:07:53.548 sys 0m0.439s 00:07:53.548 01:07:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.548 01:07:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:53.548 01:07:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:53.548 01:07:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.548 01:07:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.548 01:07:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.548 ************************************ 00:07:53.548 START TEST nvme_perf 00:07:53.548 ************************************ 00:07:53.548 01:07:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:53.548 01:07:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:54.962 Initializing NVMe Controllers 00:07:54.962 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.962 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.962 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.962 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.962 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:54.962 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:54.962 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:54.962 Associating PCIE 
(0000:00:12.0) NSID 1 with lcore 0 00:07:54.962 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:54.962 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:54.962 Initialization complete. Launching workers. 00:07:54.962 ======================================================== 00:07:54.962 Latency(us) 00:07:54.962 Device Information : IOPS MiB/s Average min max 00:07:54.962 PCIE (0000:00:13.0) NSID 1 from core 0: 18452.34 216.24 6939.74 4510.04 20024.64 00:07:54.962 PCIE (0000:00:10.0) NSID 1 from core 0: 18452.34 216.24 6932.68 4236.38 19780.75 00:07:54.962 PCIE (0000:00:11.0) NSID 1 from core 0: 18452.34 216.24 6927.69 4100.07 19022.42 00:07:54.962 PCIE (0000:00:12.0) NSID 1 from core 0: 18452.34 216.24 6921.86 3587.21 18824.21 00:07:54.962 PCIE (0000:00:12.0) NSID 2 from core 0: 18452.34 216.24 6915.86 3387.24 18237.89 00:07:54.962 PCIE (0000:00:12.0) NSID 3 from core 0: 18452.34 216.24 6910.03 3245.17 17652.10 00:07:54.962 ======================================================== 00:07:54.962 Total : 110714.05 1297.43 6924.64 3245.17 20024.64 00:07:54.962 00:07:54.962 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.962 ================================================================================= 00:07:54.962 1.00000% : 5822.622us 00:07:54.962 10.00000% : 6049.477us 00:07:54.962 25.00000% : 6326.745us 00:07:54.962 50.00000% : 6604.012us 00:07:54.962 75.00000% : 6956.898us 00:07:54.962 90.00000% : 7713.083us 00:07:54.962 95.00000% : 9880.812us 00:07:54.962 98.00000% : 11998.129us 00:07:54.962 99.00000% : 13510.498us 00:07:54.962 99.50000% : 14619.569us 00:07:54.962 99.90000% : 19761.625us 00:07:54.962 99.99000% : 20064.098us 00:07:54.962 99.99900% : 20064.098us 00:07:54.962 99.99990% : 20064.098us 00:07:54.962 99.99999% : 20064.098us 00:07:54.962 00:07:54.962 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.962 ================================================================================= 
00:07:54.962 1.00000% : 5721.797us 00:07:54.962 10.00000% : 6024.271us 00:07:54.962 25.00000% : 6301.538us 00:07:54.962 50.00000% : 6604.012us 00:07:54.962 75.00000% : 7007.311us 00:07:54.962 90.00000% : 7662.671us 00:07:54.962 95.00000% : 9981.637us 00:07:54.962 98.00000% : 11998.129us 00:07:54.962 99.00000% : 13812.972us 00:07:54.962 99.50000% : 14821.218us 00:07:54.962 99.90000% : 19358.326us 00:07:54.962 99.99000% : 19862.449us 00:07:54.962 99.99900% : 19862.449us 00:07:54.962 99.99990% : 19862.449us 00:07:54.962 99.99999% : 19862.449us 00:07:54.962 00:07:54.962 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.962 ================================================================================= 00:07:54.962 1.00000% : 5797.415us 00:07:54.962 10.00000% : 6049.477us 00:07:54.962 25.00000% : 6326.745us 00:07:54.962 50.00000% : 6604.012us 00:07:54.962 75.00000% : 6956.898us 00:07:54.962 90.00000% : 7763.495us 00:07:54.962 95.00000% : 9931.225us 00:07:54.962 98.00000% : 11746.068us 00:07:54.962 99.00000% : 13812.972us 00:07:54.962 99.50000% : 14518.745us 00:07:54.962 99.90000% : 18753.378us 00:07:54.963 99.99000% : 19055.852us 00:07:54.963 99.99900% : 19055.852us 00:07:54.963 99.99990% : 19055.852us 00:07:54.963 99.99999% : 19055.852us 00:07:54.963 00:07:54.963 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.963 ================================================================================= 00:07:54.963 1.00000% : 5797.415us 00:07:54.963 10.00000% : 6049.477us 00:07:54.963 25.00000% : 6301.538us 00:07:54.963 50.00000% : 6604.012us 00:07:54.963 75.00000% : 6956.898us 00:07:54.963 90.00000% : 7864.320us 00:07:54.963 95.00000% : 9931.225us 00:07:54.963 98.00000% : 11846.892us 00:07:54.963 99.00000% : 13712.148us 00:07:54.963 99.50000% : 14518.745us 00:07:54.963 99.90000% : 18551.729us 00:07:54.963 99.99000% : 18854.203us 00:07:54.963 99.99900% : 18854.203us 00:07:54.963 99.99990% : 18854.203us 00:07:54.963 99.99999% 
: 18854.203us 00:07:54.963 00:07:54.963 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.963 ================================================================================= 00:07:54.963 1.00000% : 5797.415us 00:07:54.963 10.00000% : 6049.477us 00:07:54.963 25.00000% : 6301.538us 00:07:54.963 50.00000% : 6604.012us 00:07:54.963 75.00000% : 6956.898us 00:07:54.963 90.00000% : 7914.732us 00:07:54.963 95.00000% : 9779.988us 00:07:54.963 98.00000% : 11846.892us 00:07:54.963 99.00000% : 13308.849us 00:07:54.963 99.50000% : 14216.271us 00:07:54.963 99.90000% : 17946.782us 00:07:54.963 99.99000% : 18249.255us 00:07:54.963 99.99900% : 18249.255us 00:07:54.963 99.99990% : 18249.255us 00:07:54.963 99.99999% : 18249.255us 00:07:54.963 00:07:54.963 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.963 ================================================================================= 00:07:54.963 1.00000% : 5797.415us 00:07:54.963 10.00000% : 6049.477us 00:07:54.963 25.00000% : 6301.538us 00:07:54.963 50.00000% : 6604.012us 00:07:54.963 75.00000% : 6956.898us 00:07:54.963 90.00000% : 7713.083us 00:07:54.963 95.00000% : 9729.575us 00:07:54.963 98.00000% : 11998.129us 00:07:54.963 99.00000% : 12855.138us 00:07:54.963 99.50000% : 14115.446us 00:07:54.963 99.90000% : 17442.658us 00:07:54.963 99.99000% : 17644.308us 00:07:54.963 99.99900% : 17745.132us 00:07:54.963 99.99990% : 17745.132us 00:07:54.963 99.99999% : 17745.132us 00:07:54.963 00:07:54.963 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.963 ============================================================================== 00:07:54.963 Range in us Cumulative IO count 00:07:54.963 4486.695 - 4511.902: 0.0054% ( 1) 00:07:54.963 4511.902 - 4537.108: 0.0270% ( 4) 00:07:54.963 4537.108 - 4562.314: 0.0378% ( 2) 00:07:54.963 4562.314 - 4587.520: 0.0433% ( 1) 00:07:54.963 4587.520 - 4612.726: 0.0541% ( 2) 00:07:54.963 4612.726 - 4637.932: 0.0757% ( 4) 00:07:54.963 
4637.932 - 4663.138: 0.0865% ( 2) 00:07:54.963 4663.138 - 4688.345: 0.1081% ( 4) 00:07:54.963 4688.345 - 4713.551: 0.1189% ( 2) 00:07:54.963 4713.551 - 4738.757: 0.1298% ( 2) 00:07:54.963 4738.757 - 4763.963: 0.1460% ( 3) 00:07:54.963 4763.963 - 4789.169: 0.1568% ( 2) 00:07:54.963 4789.169 - 4814.375: 0.1676% ( 2) 00:07:54.963 4814.375 - 4839.582: 0.1784% ( 2) 00:07:54.963 4839.582 - 4864.788: 0.1946% ( 3) 00:07:54.963 4864.788 - 4889.994: 0.2054% ( 2) 00:07:54.963 4889.994 - 4915.200: 0.2163% ( 2) 00:07:54.963 4915.200 - 4940.406: 0.2271% ( 2) 00:07:54.963 4940.406 - 4965.612: 0.2433% ( 3) 00:07:54.963 4965.612 - 4990.818: 0.2541% ( 2) 00:07:54.963 4990.818 - 5016.025: 0.2649% ( 2) 00:07:54.963 5016.025 - 5041.231: 0.2811% ( 3) 00:07:54.963 5041.231 - 5066.437: 0.2920% ( 2) 00:07:54.963 5066.437 - 5091.643: 0.3028% ( 2) 00:07:54.963 5091.643 - 5116.849: 0.3190% ( 3) 00:07:54.963 5116.849 - 5142.055: 0.3298% ( 2) 00:07:54.963 5142.055 - 5167.262: 0.3406% ( 2) 00:07:54.963 5167.262 - 5192.468: 0.3460% ( 1) 00:07:54.963 5696.591 - 5721.797: 0.3839% ( 7) 00:07:54.963 5721.797 - 5747.003: 0.4704% ( 16) 00:07:54.963 5747.003 - 5772.209: 0.6272% ( 29) 00:07:54.963 5772.209 - 5797.415: 0.9407% ( 58) 00:07:54.963 5797.415 - 5822.622: 1.3949% ( 84) 00:07:54.963 5822.622 - 5847.828: 2.2221% ( 153) 00:07:54.963 5847.828 - 5873.034: 2.9574% ( 136) 00:07:54.963 5873.034 - 5898.240: 3.7522% ( 147) 00:07:54.963 5898.240 - 5923.446: 4.4658% ( 132) 00:07:54.963 5923.446 - 5948.652: 5.4282% ( 178) 00:07:54.963 5948.652 - 5973.858: 6.5311% ( 204) 00:07:54.963 5973.858 - 5999.065: 7.8341% ( 241) 00:07:54.963 5999.065 - 6024.271: 9.0939% ( 233) 00:07:54.963 6024.271 - 6049.477: 10.1914% ( 203) 00:07:54.963 6049.477 - 6074.683: 11.3214% ( 209) 00:07:54.963 6074.683 - 6099.889: 12.5324% ( 224) 00:07:54.963 6099.889 - 6125.095: 13.7976% ( 234) 00:07:54.963 6125.095 - 6150.302: 15.1654% ( 253) 00:07:54.963 6150.302 - 6175.508: 16.5441% ( 255) 00:07:54.963 6175.508 - 6200.714: 17.9282% ( 
256) 00:07:54.963 6200.714 - 6225.920: 19.4907% ( 289) 00:07:54.963 6225.920 - 6251.126: 21.1451% ( 306) 00:07:54.963 6251.126 - 6276.332: 22.8590% ( 317) 00:07:54.963 6276.332 - 6301.538: 24.7026% ( 341) 00:07:54.963 6301.538 - 6326.745: 26.8274% ( 393) 00:07:54.963 6326.745 - 6351.951: 28.9252% ( 388) 00:07:54.963 6351.951 - 6377.157: 31.2500% ( 430) 00:07:54.963 6377.157 - 6402.363: 33.5748% ( 430) 00:07:54.963 6402.363 - 6427.569: 35.8348% ( 418) 00:07:54.963 6427.569 - 6452.775: 38.2515% ( 447) 00:07:54.963 6452.775 - 6503.188: 43.0580% ( 889) 00:07:54.963 6503.188 - 6553.600: 47.9022% ( 896) 00:07:54.963 6553.600 - 6604.012: 52.6925% ( 886) 00:07:54.963 6604.012 - 6654.425: 56.8934% ( 777) 00:07:54.963 6654.425 - 6704.837: 60.7969% ( 722) 00:07:54.963 6704.837 - 6755.249: 64.4464% ( 675) 00:07:54.963 6755.249 - 6805.662: 67.7119% ( 604) 00:07:54.963 6805.662 - 6856.074: 70.6207% ( 538) 00:07:54.963 6856.074 - 6906.486: 73.4105% ( 516) 00:07:54.963 6906.486 - 6956.898: 76.1462% ( 506) 00:07:54.963 6956.898 - 7007.311: 78.7576% ( 483) 00:07:54.963 7007.311 - 7057.723: 81.2013% ( 452) 00:07:54.963 7057.723 - 7108.135: 83.3423% ( 396) 00:07:54.963 7108.135 - 7158.548: 85.1049% ( 326) 00:07:54.963 7158.548 - 7208.960: 86.3971% ( 239) 00:07:54.963 7208.960 - 7259.372: 87.2351% ( 155) 00:07:54.963 7259.372 - 7309.785: 87.8839% ( 120) 00:07:54.963 7309.785 - 7360.197: 88.3164% ( 80) 00:07:54.963 7360.197 - 7410.609: 88.7327% ( 77) 00:07:54.963 7410.609 - 7461.022: 89.0301% ( 55) 00:07:54.963 7461.022 - 7511.434: 89.3436% ( 58) 00:07:54.963 7511.434 - 7561.846: 89.5923% ( 46) 00:07:54.963 7561.846 - 7612.258: 89.8086% ( 40) 00:07:54.963 7612.258 - 7662.671: 89.9870% ( 33) 00:07:54.963 7662.671 - 7713.083: 90.1330% ( 27) 00:07:54.963 7713.083 - 7763.495: 90.2682% ( 25) 00:07:54.963 7763.495 - 7813.908: 90.3979% ( 24) 00:07:54.963 7813.908 - 7864.320: 90.5547% ( 29) 00:07:54.963 7864.320 - 7914.732: 90.6628% ( 20) 00:07:54.963 7914.732 - 7965.145: 90.7602% ( 18) 
00:07:54.963 7965.145 - 8015.557: 90.8575% ( 18) 00:07:54.963 8015.557 - 8065.969: 90.9494% ( 17) 00:07:54.963 8065.969 - 8116.382: 91.0467% ( 18) 00:07:54.963 8116.382 - 8166.794: 91.1440% ( 18) 00:07:54.963 8166.794 - 8217.206: 91.2413% ( 18) 00:07:54.963 8217.206 - 8267.618: 91.3333% ( 17) 00:07:54.963 8267.618 - 8318.031: 91.4414% ( 20) 00:07:54.963 8318.031 - 8368.443: 91.5279% ( 16) 00:07:54.963 8368.443 - 8418.855: 91.6036% ( 14) 00:07:54.963 8418.855 - 8469.268: 91.6847% ( 15) 00:07:54.963 8469.268 - 8519.680: 91.7658% ( 15) 00:07:54.963 8519.680 - 8570.092: 91.8523% ( 16) 00:07:54.963 8570.092 - 8620.505: 91.9496% ( 18) 00:07:54.963 8620.505 - 8670.917: 92.0361% ( 16) 00:07:54.963 8670.917 - 8721.329: 92.1226% ( 16) 00:07:54.963 8721.329 - 8771.742: 92.2145% ( 17) 00:07:54.963 8771.742 - 8822.154: 92.2956% ( 15) 00:07:54.963 8822.154 - 8872.566: 92.3929% ( 18) 00:07:54.963 8872.566 - 8922.978: 92.5011% ( 20) 00:07:54.963 8922.978 - 8973.391: 92.5876% ( 16) 00:07:54.963 8973.391 - 9023.803: 92.7498% ( 30) 00:07:54.963 9023.803 - 9074.215: 92.9012% ( 28) 00:07:54.963 9074.215 - 9124.628: 93.0417% ( 26) 00:07:54.963 9124.628 - 9175.040: 93.1823% ( 26) 00:07:54.963 9175.040 - 9225.452: 93.3229% ( 26) 00:07:54.963 9225.452 - 9275.865: 93.4472% ( 23) 00:07:54.963 9275.865 - 9326.277: 93.5878% ( 26) 00:07:54.963 9326.277 - 9376.689: 93.7122% ( 23) 00:07:54.963 9376.689 - 9427.102: 93.8365% ( 23) 00:07:54.963 9427.102 - 9477.514: 93.9663% ( 24) 00:07:54.963 9477.514 - 9527.926: 94.1014% ( 25) 00:07:54.963 9527.926 - 9578.338: 94.2420% ( 26) 00:07:54.963 9578.338 - 9628.751: 94.3772% ( 25) 00:07:54.963 9628.751 - 9679.163: 94.5069% ( 24) 00:07:54.963 9679.163 - 9729.575: 94.6475% ( 26) 00:07:54.963 9729.575 - 9779.988: 94.7935% ( 27) 00:07:54.963 9779.988 - 9830.400: 94.9124% ( 22) 00:07:54.963 9830.400 - 9880.812: 95.0151% ( 19) 00:07:54.963 9880.812 - 9931.225: 95.0908% ( 14) 00:07:54.963 9931.225 - 9981.637: 95.1665% ( 14) 00:07:54.963 9981.637 - 10032.049: 
95.2206% ( 10) 00:07:54.963 10032.049 - 10082.462: 95.2747% ( 10) 00:07:54.963 10082.462 - 10132.874: 95.3233% ( 9) 00:07:54.963 10132.874 - 10183.286: 95.3503% ( 5) 00:07:54.963 10183.286 - 10233.698: 95.3828% ( 6) 00:07:54.963 10233.698 - 10284.111: 95.4423% ( 11) 00:07:54.963 10284.111 - 10334.523: 95.5234% ( 15) 00:07:54.963 10334.523 - 10384.935: 95.5720% ( 9) 00:07:54.963 10384.935 - 10435.348: 95.6099% ( 7) 00:07:54.963 10435.348 - 10485.760: 95.6639% ( 10) 00:07:54.963 10485.760 - 10536.172: 95.7180% ( 10) 00:07:54.963 10536.172 - 10586.585: 95.7829% ( 12) 00:07:54.963 10586.585 - 10636.997: 95.8478% ( 12) 00:07:54.963 10636.997 - 10687.409: 95.9126% ( 12) 00:07:54.963 10687.409 - 10737.822: 95.9613% ( 9) 00:07:54.963 10737.822 - 10788.234: 96.0154% ( 10) 00:07:54.963 10788.234 - 10838.646: 96.0802% ( 12) 00:07:54.963 10838.646 - 10889.058: 96.1289% ( 9) 00:07:54.964 10889.058 - 10939.471: 96.1776% ( 9) 00:07:54.964 10939.471 - 10989.883: 96.2911% ( 21) 00:07:54.964 10989.883 - 11040.295: 96.3722% ( 15) 00:07:54.964 11040.295 - 11090.708: 96.4695% ( 18) 00:07:54.964 11090.708 - 11141.120: 96.5560% ( 16) 00:07:54.964 11141.120 - 11191.532: 96.6533% ( 18) 00:07:54.964 11191.532 - 11241.945: 96.7452% ( 17) 00:07:54.964 11241.945 - 11292.357: 96.8317% ( 16) 00:07:54.964 11292.357 - 11342.769: 96.9291% ( 18) 00:07:54.964 11342.769 - 11393.182: 97.0210% ( 17) 00:07:54.964 11393.182 - 11443.594: 97.1237% ( 19) 00:07:54.964 11443.594 - 11494.006: 97.2156% ( 17) 00:07:54.964 11494.006 - 11544.418: 97.3075% ( 17) 00:07:54.964 11544.418 - 11594.831: 97.4103% ( 19) 00:07:54.964 11594.831 - 11645.243: 97.5022% ( 17) 00:07:54.964 11645.243 - 11695.655: 97.6103% ( 20) 00:07:54.964 11695.655 - 11746.068: 97.7076% ( 18) 00:07:54.964 11746.068 - 11796.480: 97.8049% ( 18) 00:07:54.964 11796.480 - 11846.892: 97.8752% ( 13) 00:07:54.964 11846.892 - 11897.305: 97.9401% ( 12) 00:07:54.964 11897.305 - 11947.717: 97.9996% ( 11) 00:07:54.964 11947.717 - 11998.129: 98.0590% ( 11) 
00:07:54.964 11998.129 - 12048.542: 98.1293% ( 13) 00:07:54.964 12048.542 - 12098.954: 98.1834% ( 10) 00:07:54.964 12098.954 - 12149.366: 98.2321% ( 9) 00:07:54.964 12149.366 - 12199.778: 98.2753% ( 8) 00:07:54.964 12199.778 - 12250.191: 98.3186% ( 8) 00:07:54.964 12250.191 - 12300.603: 98.3564% ( 7) 00:07:54.964 12300.603 - 12351.015: 98.3942% ( 7) 00:07:54.964 12351.015 - 12401.428: 98.4375% ( 8) 00:07:54.964 12401.428 - 12451.840: 98.4808% ( 8) 00:07:54.964 12451.840 - 12502.252: 98.5240% ( 8) 00:07:54.964 12502.252 - 12552.665: 98.5619% ( 7) 00:07:54.964 12552.665 - 12603.077: 98.5997% ( 7) 00:07:54.964 12603.077 - 12653.489: 98.6484% ( 9) 00:07:54.964 12653.489 - 12703.902: 98.6862% ( 7) 00:07:54.964 12703.902 - 12754.314: 98.7295% ( 8) 00:07:54.964 12754.314 - 12804.726: 98.7457% ( 3) 00:07:54.964 12804.726 - 12855.138: 98.7619% ( 3) 00:07:54.964 12855.138 - 12905.551: 98.7835% ( 4) 00:07:54.964 12905.551 - 13006.375: 98.8268% ( 8) 00:07:54.964 13006.375 - 13107.200: 98.8592% ( 6) 00:07:54.964 13107.200 - 13208.025: 98.8971% ( 7) 00:07:54.964 13208.025 - 13308.849: 98.9403% ( 8) 00:07:54.964 13308.849 - 13409.674: 98.9728% ( 6) 00:07:54.964 13409.674 - 13510.498: 99.0214% ( 9) 00:07:54.964 13510.498 - 13611.323: 99.0484% ( 5) 00:07:54.964 13611.323 - 13712.148: 99.0647% ( 3) 00:07:54.964 13712.148 - 13812.972: 99.0917% ( 5) 00:07:54.964 13812.972 - 13913.797: 99.1404% ( 9) 00:07:54.964 13913.797 - 14014.622: 99.1944% ( 10) 00:07:54.964 14014.622 - 14115.446: 99.2485% ( 10) 00:07:54.964 14115.446 - 14216.271: 99.3026% ( 10) 00:07:54.964 14216.271 - 14317.095: 99.3566% ( 10) 00:07:54.964 14317.095 - 14417.920: 99.4053% ( 9) 00:07:54.964 14417.920 - 14518.745: 99.4593% ( 10) 00:07:54.964 14518.745 - 14619.569: 99.5080% ( 9) 00:07:54.964 14619.569 - 14720.394: 99.5567% ( 9) 00:07:54.964 14720.394 - 14821.218: 99.5891% ( 6) 00:07:54.964 14821.218 - 14922.043: 99.6053% ( 3) 00:07:54.964 14922.043 - 15022.868: 99.6215% ( 3) 00:07:54.964 15022.868 - 15123.692: 
99.6378% ( 3) 00:07:54.964 15123.692 - 15224.517: 99.6540% ( 3) 00:07:54.964 18955.028 - 19055.852: 99.6810% ( 5) 00:07:54.964 19055.852 - 19156.677: 99.7135% ( 6) 00:07:54.964 19156.677 - 19257.502: 99.7459% ( 6) 00:07:54.964 19257.502 - 19358.326: 99.7783% ( 6) 00:07:54.964 19358.326 - 19459.151: 99.8162% ( 7) 00:07:54.964 19459.151 - 19559.975: 99.8486% ( 6) 00:07:54.964 19559.975 - 19660.800: 99.8811% ( 6) 00:07:54.964 19660.800 - 19761.625: 99.9189% ( 7) 00:07:54.964 19761.625 - 19862.449: 99.9459% ( 5) 00:07:54.964 19862.449 - 19963.274: 99.9838% ( 7) 00:07:54.964 19963.274 - 20064.098: 100.0000% ( 3) 00:07:54.964 00:07:54.964 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.964 ============================================================================== 00:07:54.964 Range in us Cumulative IO count 00:07:54.964 4234.634 - 4259.840: 0.0162% ( 3) 00:07:54.964 4259.840 - 4285.046: 0.0324% ( 3) 00:07:54.964 4285.046 - 4310.252: 0.0378% ( 1) 00:07:54.964 4310.252 - 4335.458: 0.0487% ( 2) 00:07:54.964 4335.458 - 4360.665: 0.0595% ( 2) 00:07:54.964 4360.665 - 4385.871: 0.0649% ( 1) 00:07:54.964 4385.871 - 4411.077: 0.0811% ( 3) 00:07:54.964 4411.077 - 4436.283: 0.0919% ( 2) 00:07:54.964 4436.283 - 4461.489: 0.0973% ( 1) 00:07:54.964 4461.489 - 4486.695: 0.1135% ( 3) 00:07:54.964 4486.695 - 4511.902: 0.1189% ( 1) 00:07:54.964 4511.902 - 4537.108: 0.1298% ( 2) 00:07:54.964 4537.108 - 4562.314: 0.1406% ( 2) 00:07:54.964 4562.314 - 4587.520: 0.1514% ( 2) 00:07:54.964 4587.520 - 4612.726: 0.1676% ( 3) 00:07:54.964 4612.726 - 4637.932: 0.1730% ( 1) 00:07:54.964 4637.932 - 4663.138: 0.1892% ( 3) 00:07:54.964 4663.138 - 4688.345: 0.2000% ( 2) 00:07:54.964 4688.345 - 4713.551: 0.2054% ( 1) 00:07:54.964 4713.551 - 4738.757: 0.2217% ( 3) 00:07:54.964 4738.757 - 4763.963: 0.2325% ( 2) 00:07:54.964 4763.963 - 4789.169: 0.2433% ( 2) 00:07:54.964 4789.169 - 4814.375: 0.2487% ( 1) 00:07:54.964 4814.375 - 4839.582: 0.2649% ( 3) 00:07:54.964 4839.582 - 
4864.788: 0.2703% ( 1) 00:07:54.964 4864.788 - 4889.994: 0.2811% ( 2) 00:07:54.964 4889.994 - 4915.200: 0.2974% ( 3) 00:07:54.964 4915.200 - 4940.406: 0.3028% ( 1) 00:07:54.964 4940.406 - 4965.612: 0.3190% ( 3) 00:07:54.964 4965.612 - 4990.818: 0.3298% ( 2) 00:07:54.964 4990.818 - 5016.025: 0.3406% ( 2) 00:07:54.964 5016.025 - 5041.231: 0.3460% ( 1) 00:07:54.964 5595.766 - 5620.972: 0.3568% ( 2) 00:07:54.964 5620.972 - 5646.178: 0.4487% ( 17) 00:07:54.964 5646.178 - 5671.385: 0.5893% ( 26) 00:07:54.964 5671.385 - 5696.591: 0.7623% ( 32) 00:07:54.964 5696.591 - 5721.797: 1.1354% ( 69) 00:07:54.964 5721.797 - 5747.003: 1.4976% ( 67) 00:07:54.964 5747.003 - 5772.209: 2.0545% ( 103) 00:07:54.964 5772.209 - 5797.415: 2.7141% ( 122) 00:07:54.964 5797.415 - 5822.622: 3.3791% ( 123) 00:07:54.964 5822.622 - 5847.828: 4.1414% ( 141) 00:07:54.964 5847.828 - 5873.034: 5.0443% ( 167) 00:07:54.964 5873.034 - 5898.240: 5.9148% ( 161) 00:07:54.964 5898.240 - 5923.446: 6.8501% ( 173) 00:07:54.964 5923.446 - 5948.652: 7.8071% ( 177) 00:07:54.964 5948.652 - 5973.858: 8.7478% ( 174) 00:07:54.964 5973.858 - 5999.065: 9.7967% ( 194) 00:07:54.964 5999.065 - 6024.271: 10.8131% ( 188) 00:07:54.964 6024.271 - 6049.477: 11.8999% ( 201) 00:07:54.964 6049.477 - 6074.683: 13.0353% ( 210) 00:07:54.964 6074.683 - 6099.889: 14.1003% ( 197) 00:07:54.964 6099.889 - 6125.095: 15.4628% ( 252) 00:07:54.964 6125.095 - 6150.302: 16.6685% ( 223) 00:07:54.964 6150.302 - 6175.508: 18.0958% ( 264) 00:07:54.964 6175.508 - 6200.714: 19.6042% ( 279) 00:07:54.964 6200.714 - 6225.920: 21.2478% ( 304) 00:07:54.964 6225.920 - 6251.126: 23.1618% ( 354) 00:07:54.964 6251.126 - 6276.332: 24.9567% ( 332) 00:07:54.964 6276.332 - 6301.538: 26.7463% ( 331) 00:07:54.964 6301.538 - 6326.745: 28.7359% ( 368) 00:07:54.964 6326.745 - 6351.951: 30.7418% ( 371) 00:07:54.964 6351.951 - 6377.157: 32.6773% ( 358) 00:07:54.964 6377.157 - 6402.363: 34.8616% ( 404) 00:07:54.964 6402.363 - 6427.569: 36.7971% ( 358) 00:07:54.964 
6427.569 - 6452.775: 38.7976% ( 370) 00:07:54.964 6452.775 - 6503.188: 42.9660% ( 771) 00:07:54.964 6503.188 - 6553.600: 47.1075% ( 766) 00:07:54.964 6553.600 - 6604.012: 51.2760% ( 771) 00:07:54.964 6604.012 - 6654.425: 55.4282% ( 768) 00:07:54.964 6654.425 - 6704.837: 59.4074% ( 736) 00:07:54.964 6704.837 - 6755.249: 62.9920% ( 663) 00:07:54.964 6755.249 - 6805.662: 66.2359% ( 600) 00:07:54.964 6805.662 - 6856.074: 69.2853% ( 564) 00:07:54.964 6856.074 - 6906.486: 71.9453% ( 492) 00:07:54.964 6906.486 - 6956.898: 74.4431% ( 462) 00:07:54.964 6956.898 - 7007.311: 76.8815% ( 451) 00:07:54.964 7007.311 - 7057.723: 79.2388% ( 436) 00:07:54.964 7057.723 - 7108.135: 81.4122% ( 402) 00:07:54.964 7108.135 - 7158.548: 83.4180% ( 371) 00:07:54.964 7158.548 - 7208.960: 85.2292% ( 335) 00:07:54.964 7208.960 - 7259.372: 86.5430% ( 243) 00:07:54.964 7259.372 - 7309.785: 87.5054% ( 178) 00:07:54.964 7309.785 - 7360.197: 88.1866% ( 126) 00:07:54.964 7360.197 - 7410.609: 88.6624% ( 88) 00:07:54.964 7410.609 - 7461.022: 89.0409% ( 70) 00:07:54.964 7461.022 - 7511.434: 89.3599% ( 59) 00:07:54.964 7511.434 - 7561.846: 89.6086% ( 46) 00:07:54.964 7561.846 - 7612.258: 89.8356% ( 42) 00:07:54.964 7612.258 - 7662.671: 90.0411% ( 38) 00:07:54.964 7662.671 - 7713.083: 90.2087% ( 31) 00:07:54.964 7713.083 - 7763.495: 90.3655% ( 29) 00:07:54.964 7763.495 - 7813.908: 90.5006% ( 25) 00:07:54.964 7813.908 - 7864.320: 90.6412% ( 26) 00:07:54.964 7864.320 - 7914.732: 90.7764% ( 25) 00:07:54.964 7914.732 - 7965.145: 90.8845% ( 20) 00:07:54.964 7965.145 - 8015.557: 90.9926% ( 20) 00:07:54.964 8015.557 - 8065.969: 91.1008% ( 20) 00:07:54.964 8065.969 - 8116.382: 91.2251% ( 23) 00:07:54.964 8116.382 - 8166.794: 91.3495% ( 23) 00:07:54.964 8166.794 - 8217.206: 91.4522% ( 19) 00:07:54.964 8217.206 - 8267.618: 91.5333% ( 15) 00:07:54.964 8267.618 - 8318.031: 91.6306% ( 18) 00:07:54.964 8318.031 - 8368.443: 91.7279% ( 18) 00:07:54.964 8368.443 - 8418.855: 91.8253% ( 18) 00:07:54.964 8418.855 - 8469.268: 
91.8901% ( 12) 00:07:54.964 8469.268 - 8519.680: 91.9496% ( 11) 00:07:54.964 8519.680 - 8570.092: 92.0037% ( 10) 00:07:54.964 8570.092 - 8620.505: 92.0956% ( 17) 00:07:54.964 8620.505 - 8670.917: 92.1821% ( 16) 00:07:54.964 8670.917 - 8721.329: 92.2578% ( 14) 00:07:54.964 8721.329 - 8771.742: 92.3659% ( 20) 00:07:54.964 8771.742 - 8822.154: 92.4632% ( 18) 00:07:54.964 8822.154 - 8872.566: 92.5389% ( 14) 00:07:54.964 8872.566 - 8922.978: 92.6362% ( 18) 00:07:54.965 8922.978 - 8973.391: 92.7119% ( 14) 00:07:54.965 8973.391 - 9023.803: 92.7822% ( 13) 00:07:54.965 9023.803 - 9074.215: 92.8579% ( 14) 00:07:54.965 9074.215 - 9124.628: 92.9390% ( 15) 00:07:54.965 9124.628 - 9175.040: 93.0417% ( 19) 00:07:54.965 9175.040 - 9225.452: 93.1553% ( 21) 00:07:54.965 9225.452 - 9275.865: 93.2472% ( 17) 00:07:54.965 9275.865 - 9326.277: 93.3499% ( 19) 00:07:54.965 9326.277 - 9376.689: 93.4472% ( 18) 00:07:54.965 9376.689 - 9427.102: 93.5500% ( 19) 00:07:54.965 9427.102 - 9477.514: 93.6635% ( 21) 00:07:54.965 9477.514 - 9527.926: 93.8095% ( 27) 00:07:54.965 9527.926 - 9578.338: 93.9554% ( 27) 00:07:54.965 9578.338 - 9628.751: 94.1231% ( 31) 00:07:54.965 9628.751 - 9679.163: 94.2636% ( 26) 00:07:54.965 9679.163 - 9729.575: 94.4258% ( 30) 00:07:54.965 9729.575 - 9779.988: 94.5610% ( 25) 00:07:54.965 9779.988 - 9830.400: 94.6745% ( 21) 00:07:54.965 9830.400 - 9880.812: 94.8097% ( 25) 00:07:54.965 9880.812 - 9931.225: 94.9394% ( 24) 00:07:54.965 9931.225 - 9981.637: 95.0368% ( 18) 00:07:54.965 9981.637 - 10032.049: 95.1665% ( 24) 00:07:54.965 10032.049 - 10082.462: 95.2909% ( 23) 00:07:54.965 10082.462 - 10132.874: 95.3882% ( 18) 00:07:54.965 10132.874 - 10183.286: 95.4801% ( 17) 00:07:54.965 10183.286 - 10233.698: 95.5828% ( 19) 00:07:54.965 10233.698 - 10284.111: 95.6801% ( 18) 00:07:54.965 10284.111 - 10334.523: 95.7667% ( 16) 00:07:54.965 10334.523 - 10384.935: 95.8586% ( 17) 00:07:54.965 10384.935 - 10435.348: 95.9451% ( 16) 00:07:54.965 10435.348 - 10485.760: 96.0370% ( 17) 
00:07:54.965 10485.760 - 10536.172: 96.0965% ( 11) 00:07:54.965 10536.172 - 10586.585: 96.1830% ( 16) 00:07:54.965 10586.585 - 10636.997: 96.2478% ( 12) 00:07:54.965 10636.997 - 10687.409: 96.3127% ( 12) 00:07:54.965 10687.409 - 10737.822: 96.3884% ( 14) 00:07:54.965 10737.822 - 10788.234: 96.4479% ( 11) 00:07:54.965 10788.234 - 10838.646: 96.5019% ( 10) 00:07:54.965 10838.646 - 10889.058: 96.5506% ( 9) 00:07:54.965 10889.058 - 10939.471: 96.6101% ( 11) 00:07:54.965 10939.471 - 10989.883: 96.6696% ( 11) 00:07:54.965 10989.883 - 11040.295: 96.7561% ( 16) 00:07:54.965 11040.295 - 11090.708: 96.8480% ( 17) 00:07:54.965 11090.708 - 11141.120: 96.9291% ( 15) 00:07:54.965 11141.120 - 11191.532: 97.0318% ( 19) 00:07:54.965 11191.532 - 11241.945: 97.1021% ( 13) 00:07:54.965 11241.945 - 11292.357: 97.1778% ( 14) 00:07:54.965 11292.357 - 11342.769: 97.2589% ( 15) 00:07:54.965 11342.769 - 11393.182: 97.3075% ( 9) 00:07:54.965 11393.182 - 11443.594: 97.3670% ( 11) 00:07:54.965 11443.594 - 11494.006: 97.4373% ( 13) 00:07:54.965 11494.006 - 11544.418: 97.4913% ( 10) 00:07:54.965 11544.418 - 11594.831: 97.5616% ( 13) 00:07:54.965 11594.831 - 11645.243: 97.6103% ( 9) 00:07:54.965 11645.243 - 11695.655: 97.6752% ( 12) 00:07:54.965 11695.655 - 11746.068: 97.7292% ( 10) 00:07:54.965 11746.068 - 11796.480: 97.7887% ( 11) 00:07:54.965 11796.480 - 11846.892: 97.8536% ( 12) 00:07:54.965 11846.892 - 11897.305: 97.9077% ( 10) 00:07:54.965 11897.305 - 11947.717: 97.9779% ( 13) 00:07:54.965 11947.717 - 11998.129: 98.0536% ( 14) 00:07:54.965 11998.129 - 12048.542: 98.1239% ( 13) 00:07:54.965 12048.542 - 12098.954: 98.1726% ( 9) 00:07:54.965 12098.954 - 12149.366: 98.2266% ( 10) 00:07:54.965 12149.366 - 12199.778: 98.2645% ( 7) 00:07:54.965 12199.778 - 12250.191: 98.3131% ( 9) 00:07:54.965 12250.191 - 12300.603: 98.3672% ( 10) 00:07:54.965 12300.603 - 12351.015: 98.4213% ( 10) 00:07:54.965 12351.015 - 12401.428: 98.4753% ( 10) 00:07:54.965 12401.428 - 12451.840: 98.5078% ( 6) 00:07:54.965 
12451.840 - 12502.252: 98.5402% ( 6) 00:07:54.965 12502.252 - 12552.665: 98.5619% ( 4) 00:07:54.965 12552.665 - 12603.077: 98.5835% ( 4) 00:07:54.965 12603.077 - 12653.489: 98.6105% ( 5) 00:07:54.965 12653.489 - 12703.902: 98.6321% ( 4) 00:07:54.965 12703.902 - 12754.314: 98.6484% ( 3) 00:07:54.965 12754.314 - 12804.726: 98.6808% ( 6) 00:07:54.965 12804.726 - 12855.138: 98.7078% ( 5) 00:07:54.965 12855.138 - 12905.551: 98.7295% ( 4) 00:07:54.965 12905.551 - 13006.375: 98.7565% ( 5) 00:07:54.965 13006.375 - 13107.200: 98.7781% ( 4) 00:07:54.965 13107.200 - 13208.025: 98.8106% ( 6) 00:07:54.965 13208.025 - 13308.849: 98.8322% ( 4) 00:07:54.965 13308.849 - 13409.674: 98.8646% ( 6) 00:07:54.965 13409.674 - 13510.498: 98.8917% ( 5) 00:07:54.965 13510.498 - 13611.323: 98.9133% ( 4) 00:07:54.965 13611.323 - 13712.148: 98.9619% ( 9) 00:07:54.965 13712.148 - 13812.972: 99.0214% ( 11) 00:07:54.965 13812.972 - 13913.797: 99.0755% ( 10) 00:07:54.965 13913.797 - 14014.622: 99.1404% ( 12) 00:07:54.965 14014.622 - 14115.446: 99.1782% ( 7) 00:07:54.965 14115.446 - 14216.271: 99.2377% ( 11) 00:07:54.965 14216.271 - 14317.095: 99.2971% ( 11) 00:07:54.965 14317.095 - 14417.920: 99.3512% ( 10) 00:07:54.965 14417.920 - 14518.745: 99.4053% ( 10) 00:07:54.965 14518.745 - 14619.569: 99.4485% ( 8) 00:07:54.965 14619.569 - 14720.394: 99.4918% ( 8) 00:07:54.965 14720.394 - 14821.218: 99.5296% ( 7) 00:07:54.965 14821.218 - 14922.043: 99.5513% ( 4) 00:07:54.965 14922.043 - 15022.868: 99.5675% ( 3) 00:07:54.965 15022.868 - 15123.692: 99.5783% ( 2) 00:07:54.965 15123.692 - 15224.517: 99.5945% ( 3) 00:07:54.965 15224.517 - 15325.342: 99.6053% ( 2) 00:07:54.965 15325.342 - 15426.166: 99.6215% ( 3) 00:07:54.965 15426.166 - 15526.991: 99.6378% ( 3) 00:07:54.965 15526.991 - 15627.815: 99.6486% ( 2) 00:07:54.965 15627.815 - 15728.640: 99.6540% ( 1) 00:07:54.965 18350.080 - 18450.905: 99.6648% ( 2) 00:07:54.965 18450.905 - 18551.729: 99.6810% ( 3) 00:07:54.965 18551.729 - 18652.554: 99.7189% ( 7) 
00:07:54.965 18652.554 - 18753.378: 99.7513% ( 6) 00:07:54.965 18753.378 - 18854.203: 99.7783% ( 5) 00:07:54.965 18854.203 - 18955.028: 99.8054% ( 5) 00:07:54.965 18955.028 - 19055.852: 99.8324% ( 5) 00:07:54.965 19055.852 - 19156.677: 99.8648% ( 6) 00:07:54.965 19156.677 - 19257.502: 99.8919% ( 5) 00:07:54.965 19257.502 - 19358.326: 99.9243% ( 6) 00:07:54.965 19358.326 - 19459.151: 99.9567% ( 6) 00:07:54.965 19459.151 - 19559.975: 99.9676% ( 2) 00:07:54.965 19660.800 - 19761.625: 99.9892% ( 4) 00:07:54.965 19761.625 - 19862.449: 100.0000% ( 2) 00:07:54.965 00:07:54.965 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.965 ============================================================================== 00:07:54.965 Range in us Cumulative IO count 00:07:54.965 4083.397 - 4108.603: 0.0108% ( 2) 00:07:54.965 4108.603 - 4133.809: 0.0162% ( 1) 00:07:54.965 4133.809 - 4159.015: 0.0324% ( 3) 00:07:54.965 4159.015 - 4184.222: 0.0433% ( 2) 00:07:54.965 4184.222 - 4209.428: 0.0541% ( 2) 00:07:54.965 4209.428 - 4234.634: 0.0757% ( 4) 00:07:54.965 4234.634 - 4259.840: 0.0865% ( 2) 00:07:54.965 4259.840 - 4285.046: 0.0973% ( 2) 00:07:54.965 4285.046 - 4310.252: 0.1135% ( 3) 00:07:54.965 4310.252 - 4335.458: 0.1244% ( 2) 00:07:54.965 4335.458 - 4360.665: 0.1406% ( 3) 00:07:54.965 4360.665 - 4385.871: 0.1514% ( 2) 00:07:54.965 4385.871 - 4411.077: 0.1622% ( 2) 00:07:54.965 4411.077 - 4436.283: 0.1730% ( 2) 00:07:54.965 4436.283 - 4461.489: 0.1892% ( 3) 00:07:54.965 4461.489 - 4486.695: 0.2000% ( 2) 00:07:54.965 4486.695 - 4511.902: 0.2109% ( 2) 00:07:54.965 4511.902 - 4537.108: 0.2271% ( 3) 00:07:54.965 4537.108 - 4562.314: 0.2379% ( 2) 00:07:54.965 4562.314 - 4587.520: 0.2487% ( 2) 00:07:54.965 4587.520 - 4612.726: 0.2649% ( 3) 00:07:54.965 4612.726 - 4637.932: 0.2757% ( 2) 00:07:54.965 4637.932 - 4663.138: 0.2920% ( 3) 00:07:54.965 4663.138 - 4688.345: 0.3028% ( 2) 00:07:54.965 4688.345 - 4713.551: 0.3136% ( 2) 00:07:54.965 4713.551 - 4738.757: 0.3298% ( 3) 
00:07:54.965 4738.757 - 4763.963: 0.3406% ( 2) 00:07:54.965 4763.963 - 4789.169: 0.3460% ( 1) 00:07:54.965 5444.529 - 5469.735: 0.3514% ( 1) 00:07:54.965 5469.735 - 5494.942: 0.3622% ( 2) 00:07:54.965 5494.942 - 5520.148: 0.3731% ( 2) 00:07:54.965 5520.148 - 5545.354: 0.3839% ( 2) 00:07:54.965 5545.354 - 5570.560: 0.3947% ( 2) 00:07:54.965 5570.560 - 5595.766: 0.4109% ( 3) 00:07:54.965 5595.766 - 5620.972: 0.4271% ( 3) 00:07:54.965 5620.972 - 5646.178: 0.4379% ( 2) 00:07:54.965 5646.178 - 5671.385: 0.4542% ( 3) 00:07:54.965 5671.385 - 5696.591: 0.4704% ( 3) 00:07:54.965 5696.591 - 5721.797: 0.5082% ( 7) 00:07:54.965 5721.797 - 5747.003: 0.6109% ( 19) 00:07:54.965 5747.003 - 5772.209: 0.7948% ( 34) 00:07:54.965 5772.209 - 5797.415: 1.0975% ( 56) 00:07:54.965 5797.415 - 5822.622: 1.5679% ( 87) 00:07:54.965 5822.622 - 5847.828: 2.1951% ( 116) 00:07:54.965 5847.828 - 5873.034: 2.8871% ( 128) 00:07:54.965 5873.034 - 5898.240: 3.7738% ( 164) 00:07:54.965 5898.240 - 5923.446: 4.6064% ( 154) 00:07:54.965 5923.446 - 5948.652: 5.6931% ( 201) 00:07:54.965 5948.652 - 5973.858: 6.7096% ( 188) 00:07:54.965 5973.858 - 5999.065: 7.8936% ( 219) 00:07:54.965 5999.065 - 6024.271: 9.0506% ( 214) 00:07:54.965 6024.271 - 6049.477: 10.1644% ( 206) 00:07:54.965 6049.477 - 6074.683: 11.2889% ( 208) 00:07:54.965 6074.683 - 6099.889: 12.5378% ( 231) 00:07:54.965 6099.889 - 6125.095: 13.8408% ( 241) 00:07:54.965 6125.095 - 6150.302: 15.1546% ( 243) 00:07:54.965 6150.302 - 6175.508: 16.5766% ( 263) 00:07:54.965 6175.508 - 6200.714: 17.9444% ( 253) 00:07:54.965 6200.714 - 6225.920: 19.4420% ( 277) 00:07:54.965 6225.920 - 6251.126: 21.0856% ( 304) 00:07:54.965 6251.126 - 6276.332: 22.8157% ( 320) 00:07:54.965 6276.332 - 6301.538: 24.6378% ( 337) 00:07:54.965 6301.538 - 6326.745: 26.7247% ( 386) 00:07:54.965 6326.745 - 6351.951: 28.9035% ( 403) 00:07:54.965 6351.951 - 6377.157: 31.1202% ( 410) 00:07:54.965 6377.157 - 6402.363: 33.3694% ( 416) 00:07:54.965 6402.363 - 6427.569: 35.7375% ( 438) 
00:07:54.965 6427.569 - 6452.775: 38.0569% ( 429) 00:07:54.965 6452.775 - 6503.188: 42.7336% ( 865) 00:07:54.965 6503.188 - 6553.600: 47.7022% ( 919) 00:07:54.965 6553.600 - 6604.012: 52.3843% ( 866) 00:07:54.966 6604.012 - 6654.425: 56.6285% ( 785) 00:07:54.966 6654.425 - 6704.837: 60.5320% ( 722) 00:07:54.966 6704.837 - 6755.249: 64.0841% ( 657) 00:07:54.966 6755.249 - 6805.662: 67.2848% ( 592) 00:07:54.966 6805.662 - 6856.074: 70.2206% ( 543) 00:07:54.966 6856.074 - 6906.486: 72.9509% ( 505) 00:07:54.966 6906.486 - 6956.898: 75.7461% ( 517) 00:07:54.966 6956.898 - 7007.311: 78.4656% ( 503) 00:07:54.966 7007.311 - 7057.723: 80.9580% ( 461) 00:07:54.966 7057.723 - 7108.135: 83.2072% ( 416) 00:07:54.966 7108.135 - 7158.548: 85.0076% ( 333) 00:07:54.966 7158.548 - 7208.960: 86.3106% ( 241) 00:07:54.966 7208.960 - 7259.372: 87.1972% ( 164) 00:07:54.966 7259.372 - 7309.785: 87.7920% ( 110) 00:07:54.966 7309.785 - 7360.197: 88.2029% ( 76) 00:07:54.966 7360.197 - 7410.609: 88.5705% ( 68) 00:07:54.966 7410.609 - 7461.022: 88.8246% ( 47) 00:07:54.966 7461.022 - 7511.434: 89.0733% ( 46) 00:07:54.966 7511.434 - 7561.846: 89.3058% ( 43) 00:07:54.966 7561.846 - 7612.258: 89.5275% ( 41) 00:07:54.966 7612.258 - 7662.671: 89.7113% ( 34) 00:07:54.966 7662.671 - 7713.083: 89.8627% ( 28) 00:07:54.966 7713.083 - 7763.495: 90.0303% ( 31) 00:07:54.966 7763.495 - 7813.908: 90.1979% ( 31) 00:07:54.966 7813.908 - 7864.320: 90.3114% ( 21) 00:07:54.966 7864.320 - 7914.732: 90.4196% ( 20) 00:07:54.966 7914.732 - 7965.145: 90.5277% ( 20) 00:07:54.966 7965.145 - 8015.557: 90.6250% ( 18) 00:07:54.966 8015.557 - 8065.969: 90.7439% ( 22) 00:07:54.966 8065.969 - 8116.382: 90.8737% ( 24) 00:07:54.966 8116.382 - 8166.794: 91.0197% ( 27) 00:07:54.966 8166.794 - 8217.206: 91.1657% ( 27) 00:07:54.966 8217.206 - 8267.618: 91.3008% ( 25) 00:07:54.966 8267.618 - 8318.031: 91.4198% ( 22) 00:07:54.966 8318.031 - 8368.443: 91.5333% ( 21) 00:07:54.966 8368.443 - 8418.855: 91.6306% ( 18) 00:07:54.966 8418.855 
- 8469.268: 91.7333% ( 19) 00:07:54.966 8469.268 - 8519.680: 91.8523% ( 22) 00:07:54.966 8519.680 - 8570.092: 91.9658% ( 21) 00:07:54.966 8570.092 - 8620.505: 92.0902% ( 23) 00:07:54.966 8620.505 - 8670.917: 92.2037% ( 21) 00:07:54.966 8670.917 - 8721.329: 92.3227% ( 22) 00:07:54.966 8721.329 - 8771.742: 92.4362% ( 21) 00:07:54.966 8771.742 - 8822.154: 92.5443% ( 20) 00:07:54.966 8822.154 - 8872.566: 92.6579% ( 21) 00:07:54.966 8872.566 - 8922.978: 92.7552% ( 18) 00:07:54.966 8922.978 - 8973.391: 92.8579% ( 19) 00:07:54.966 8973.391 - 9023.803: 92.9552% ( 18) 00:07:54.966 9023.803 - 9074.215: 93.0417% ( 16) 00:07:54.966 9074.215 - 9124.628: 93.1499% ( 20) 00:07:54.966 9124.628 - 9175.040: 93.2580% ( 20) 00:07:54.966 9175.040 - 9225.452: 93.3824% ( 23) 00:07:54.966 9225.452 - 9275.865: 93.5229% ( 26) 00:07:54.966 9275.865 - 9326.277: 93.6527% ( 24) 00:07:54.966 9326.277 - 9376.689: 93.7662% ( 21) 00:07:54.966 9376.689 - 9427.102: 93.9014% ( 25) 00:07:54.966 9427.102 - 9477.514: 94.0420% ( 26) 00:07:54.966 9477.514 - 9527.926: 94.1609% ( 22) 00:07:54.966 9527.926 - 9578.338: 94.2636% ( 19) 00:07:54.966 9578.338 - 9628.751: 94.3501% ( 16) 00:07:54.966 9628.751 - 9679.163: 94.4474% ( 18) 00:07:54.966 9679.163 - 9729.575: 94.5448% ( 18) 00:07:54.966 9729.575 - 9779.988: 94.6367% ( 17) 00:07:54.966 9779.988 - 9830.400: 94.7394% ( 19) 00:07:54.966 9830.400 - 9880.812: 94.8583% ( 22) 00:07:54.966 9880.812 - 9931.225: 95.0151% ( 29) 00:07:54.966 9931.225 - 9981.637: 95.1827% ( 31) 00:07:54.966 9981.637 - 10032.049: 95.3666% ( 34) 00:07:54.966 10032.049 - 10082.462: 95.5179% ( 28) 00:07:54.966 10082.462 - 10132.874: 95.6477% ( 24) 00:07:54.966 10132.874 - 10183.286: 95.7504% ( 19) 00:07:54.966 10183.286 - 10233.698: 95.8532% ( 19) 00:07:54.966 10233.698 - 10284.111: 95.9505% ( 18) 00:07:54.966 10284.111 - 10334.523: 96.0316% ( 15) 00:07:54.966 10334.523 - 10384.935: 96.1181% ( 16) 00:07:54.966 10384.935 - 10435.348: 96.2046% ( 16) 00:07:54.966 10435.348 - 10485.760: 96.2857% 
( 15) 00:07:54.966 10485.760 - 10536.172: 96.3614% ( 14) 00:07:54.966 10536.172 - 10586.585: 96.4317% ( 13) 00:07:54.966 10586.585 - 10636.997: 96.5074% ( 14) 00:07:54.966 10636.997 - 10687.409: 96.5776% ( 13) 00:07:54.966 10687.409 - 10737.822: 96.6425% ( 12) 00:07:54.966 10737.822 - 10788.234: 96.6912% ( 9) 00:07:54.966 10788.234 - 10838.646: 96.7398% ( 9) 00:07:54.966 10838.646 - 10889.058: 96.7831% ( 8) 00:07:54.966 10889.058 - 10939.471: 96.8263% ( 8) 00:07:54.966 10939.471 - 10989.883: 96.8750% ( 9) 00:07:54.966 10989.883 - 11040.295: 96.9183% ( 8) 00:07:54.966 11040.295 - 11090.708: 97.0048% ( 16) 00:07:54.966 11090.708 - 11141.120: 97.0859% ( 15) 00:07:54.966 11141.120 - 11191.532: 97.1994% ( 21) 00:07:54.966 11191.532 - 11241.945: 97.3129% ( 21) 00:07:54.966 11241.945 - 11292.357: 97.3940% ( 15) 00:07:54.966 11292.357 - 11342.769: 97.5022% ( 20) 00:07:54.966 11342.769 - 11393.182: 97.5779% ( 14) 00:07:54.966 11393.182 - 11443.594: 97.6644% ( 16) 00:07:54.966 11443.594 - 11494.006: 97.7455% ( 15) 00:07:54.966 11494.006 - 11544.418: 97.8157% ( 13) 00:07:54.966 11544.418 - 11594.831: 97.8752% ( 11) 00:07:54.966 11594.831 - 11645.243: 97.9239% ( 9) 00:07:54.966 11645.243 - 11695.655: 97.9833% ( 11) 00:07:54.966 11695.655 - 11746.068: 98.0428% ( 11) 00:07:54.966 11746.068 - 11796.480: 98.1023% ( 11) 00:07:54.966 11796.480 - 11846.892: 98.1618% ( 11) 00:07:54.966 11846.892 - 11897.305: 98.2158% ( 10) 00:07:54.966 11897.305 - 11947.717: 98.2645% ( 9) 00:07:54.966 11947.717 - 11998.129: 98.3023% ( 7) 00:07:54.966 11998.129 - 12048.542: 98.3240% ( 4) 00:07:54.966 12048.542 - 12098.954: 98.3456% ( 4) 00:07:54.966 12098.954 - 12149.366: 98.3672% ( 4) 00:07:54.966 12149.366 - 12199.778: 98.3834% ( 3) 00:07:54.966 12199.778 - 12250.191: 98.4051% ( 4) 00:07:54.966 12250.191 - 12300.603: 98.4213% ( 3) 00:07:54.966 12300.603 - 12351.015: 98.4429% ( 4) 00:07:54.966 12351.015 - 12401.428: 98.4753% ( 6) 00:07:54.966 12401.428 - 12451.840: 98.5078% ( 6) 00:07:54.966 12451.840 
- 12502.252: 98.5348% ( 5) 00:07:54.966 12502.252 - 12552.665: 98.5619% ( 5) 00:07:54.966 12552.665 - 12603.077: 98.5835% ( 4) 00:07:54.966 12603.077 - 12653.489: 98.6159% ( 6) 00:07:54.966 12653.489 - 12703.902: 98.6375% ( 4) 00:07:54.966 12703.902 - 12754.314: 98.6646% ( 5) 00:07:54.966 12754.314 - 12804.726: 98.6916% ( 5) 00:07:54.966 12804.726 - 12855.138: 98.6970% ( 1) 00:07:54.966 12855.138 - 12905.551: 98.7078% ( 2) 00:07:54.966 12905.551 - 13006.375: 98.7349% ( 5) 00:07:54.966 13006.375 - 13107.200: 98.7457% ( 2) 00:07:54.966 13107.200 - 13208.025: 98.7619% ( 3) 00:07:54.966 13208.025 - 13308.849: 98.8051% ( 8) 00:07:54.966 13308.849 - 13409.674: 98.8484% ( 8) 00:07:54.966 13409.674 - 13510.498: 98.8971% ( 9) 00:07:54.966 13510.498 - 13611.323: 98.9457% ( 9) 00:07:54.966 13611.323 - 13712.148: 98.9998% ( 10) 00:07:54.966 13712.148 - 13812.972: 99.1025% ( 19) 00:07:54.966 13812.972 - 13913.797: 99.1836% ( 15) 00:07:54.966 13913.797 - 14014.622: 99.2539% ( 13) 00:07:54.966 14014.622 - 14115.446: 99.3134% ( 11) 00:07:54.966 14115.446 - 14216.271: 99.3837% ( 13) 00:07:54.966 14216.271 - 14317.095: 99.4485% ( 12) 00:07:54.966 14317.095 - 14417.920: 99.4918% ( 8) 00:07:54.966 14417.920 - 14518.745: 99.5296% ( 7) 00:07:54.966 14518.745 - 14619.569: 99.5567% ( 5) 00:07:54.966 14619.569 - 14720.394: 99.5783% ( 4) 00:07:54.966 14720.394 - 14821.218: 99.6053% ( 5) 00:07:54.966 14821.218 - 14922.043: 99.6269% ( 4) 00:07:54.966 14922.043 - 15022.868: 99.6540% ( 5) 00:07:54.966 17946.782 - 18047.606: 99.6810% ( 5) 00:07:54.966 18047.606 - 18148.431: 99.7135% ( 6) 00:07:54.966 18148.431 - 18249.255: 99.7459% ( 6) 00:07:54.966 18249.255 - 18350.080: 99.7783% ( 6) 00:07:54.966 18350.080 - 18450.905: 99.8108% ( 6) 00:07:54.966 18450.905 - 18551.729: 99.8432% ( 6) 00:07:54.966 18551.729 - 18652.554: 99.8756% ( 6) 00:07:54.966 18652.554 - 18753.378: 99.9081% ( 6) 00:07:54.966 18753.378 - 18854.203: 99.9405% ( 6) 00:07:54.966 18854.203 - 18955.028: 99.9730% ( 6) 00:07:54.966 
18955.028 - 19055.852: 100.0000% ( 5) 00:07:54.966 00:07:54.966 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.966 ============================================================================== 00:07:54.966 Range in us Cumulative IO count 00:07:54.966 3579.274 - 3604.480: 0.0162% ( 3) 00:07:54.966 3604.480 - 3629.686: 0.0595% ( 8) 00:07:54.966 3629.686 - 3654.892: 0.0757% ( 3) 00:07:54.966 3654.892 - 3680.098: 0.0811% ( 1) 00:07:54.966 3680.098 - 3705.305: 0.0865% ( 1) 00:07:54.966 3730.511 - 3755.717: 0.0973% ( 2) 00:07:54.966 3755.717 - 3780.923: 0.1135% ( 3) 00:07:54.966 3780.923 - 3806.129: 0.1244% ( 2) 00:07:54.966 3806.129 - 3831.335: 0.1352% ( 2) 00:07:54.966 3831.335 - 3856.542: 0.1460% ( 2) 00:07:54.966 3856.542 - 3881.748: 0.1568% ( 2) 00:07:54.966 3881.748 - 3906.954: 0.1730% ( 3) 00:07:54.966 3906.954 - 3932.160: 0.1838% ( 2) 00:07:54.966 3932.160 - 3957.366: 0.1946% ( 2) 00:07:54.966 3957.366 - 3982.572: 0.2109% ( 3) 00:07:54.966 3982.572 - 4007.778: 0.2217% ( 2) 00:07:54.966 4007.778 - 4032.985: 0.2325% ( 2) 00:07:54.966 4032.985 - 4058.191: 0.2487% ( 3) 00:07:54.966 4058.191 - 4083.397: 0.2595% ( 2) 00:07:54.966 4083.397 - 4108.603: 0.2703% ( 2) 00:07:54.966 4108.603 - 4133.809: 0.2811% ( 2) 00:07:54.966 4133.809 - 4159.015: 0.2974% ( 3) 00:07:54.966 4159.015 - 4184.222: 0.3082% ( 2) 00:07:54.966 4184.222 - 4209.428: 0.3244% ( 3) 00:07:54.966 4209.428 - 4234.634: 0.3352% ( 2) 00:07:54.966 4234.634 - 4259.840: 0.3460% ( 2) 00:07:54.966 5041.231 - 5066.437: 0.3893% ( 8) 00:07:54.966 5066.437 - 5091.643: 0.4001% ( 2) 00:07:54.966 5091.643 - 5116.849: 0.4109% ( 2) 00:07:54.966 5116.849 - 5142.055: 0.4163% ( 1) 00:07:54.966 5142.055 - 5167.262: 0.4325% ( 3) 00:07:54.966 5167.262 - 5192.468: 0.4433% ( 2) 00:07:54.966 5192.468 - 5217.674: 0.4542% ( 2) 00:07:54.966 5217.674 - 5242.880: 0.4704% ( 3) 00:07:54.966 5242.880 - 5268.086: 0.4812% ( 2) 00:07:54.967 5268.086 - 5293.292: 0.4974% ( 3) 00:07:54.967 5293.292 - 5318.498: 0.5082% ( 
2) 00:07:54.967 5318.498 - 5343.705: 0.5190% ( 2) 00:07:54.967 5343.705 - 5368.911: 0.5353% ( 3) 00:07:54.967 5368.911 - 5394.117: 0.5461% ( 2) 00:07:54.967 5394.117 - 5419.323: 0.5569% ( 2) 00:07:54.967 5419.323 - 5444.529: 0.5677% ( 2) 00:07:54.967 5444.529 - 5469.735: 0.5839% ( 3) 00:07:54.967 5469.735 - 5494.942: 0.5947% ( 2) 00:07:54.967 5494.942 - 5520.148: 0.6109% ( 3) 00:07:54.967 5520.148 - 5545.354: 0.6218% ( 2) 00:07:54.967 5545.354 - 5570.560: 0.6326% ( 2) 00:07:54.967 5570.560 - 5595.766: 0.6434% ( 2) 00:07:54.967 5595.766 - 5620.972: 0.6542% ( 2) 00:07:54.967 5620.972 - 5646.178: 0.6704% ( 3) 00:07:54.967 5646.178 - 5671.385: 0.6812% ( 2) 00:07:54.967 5671.385 - 5696.591: 0.6920% ( 2) 00:07:54.967 5696.591 - 5721.797: 0.6974% ( 1) 00:07:54.967 5721.797 - 5747.003: 0.7623% ( 12) 00:07:54.967 5747.003 - 5772.209: 0.9029% ( 26) 00:07:54.967 5772.209 - 5797.415: 1.0921% ( 35) 00:07:54.967 5797.415 - 5822.622: 1.5571% ( 86) 00:07:54.967 5822.622 - 5847.828: 2.3140% ( 140) 00:07:54.967 5847.828 - 5873.034: 3.1953% ( 163) 00:07:54.967 5873.034 - 5898.240: 3.9901% ( 147) 00:07:54.967 5898.240 - 5923.446: 4.8064% ( 151) 00:07:54.967 5923.446 - 5948.652: 5.7256% ( 170) 00:07:54.967 5948.652 - 5973.858: 6.8285% ( 204) 00:07:54.967 5973.858 - 5999.065: 8.0179% ( 220) 00:07:54.967 5999.065 - 6024.271: 9.3101% ( 239) 00:07:54.967 6024.271 - 6049.477: 10.5266% ( 225) 00:07:54.967 6049.477 - 6074.683: 11.6349% ( 205) 00:07:54.967 6074.683 - 6099.889: 12.8082% ( 217) 00:07:54.967 6099.889 - 6125.095: 14.1220% ( 243) 00:07:54.967 6125.095 - 6150.302: 15.4574% ( 247) 00:07:54.967 6150.302 - 6175.508: 16.9064% ( 268) 00:07:54.967 6175.508 - 6200.714: 18.2580% ( 250) 00:07:54.967 6200.714 - 6225.920: 19.8043% ( 286) 00:07:54.967 6225.920 - 6251.126: 21.4046% ( 296) 00:07:54.967 6251.126 - 6276.332: 23.1023% ( 314) 00:07:54.967 6276.332 - 6301.538: 25.0000% ( 351) 00:07:54.967 6301.538 - 6326.745: 26.9247% ( 356) 00:07:54.967 6326.745 - 6351.951: 29.0549% ( 394) 
00:07:54.967 6351.951 - 6377.157: 31.3635% ( 427) 00:07:54.967 6377.157 - 6402.363: 33.7046% ( 433) 00:07:54.967 6402.363 - 6427.569: 35.9375% ( 413) 00:07:54.967 6427.569 - 6452.775: 38.3596% ( 448) 00:07:54.967 6452.775 - 6503.188: 43.2580% ( 906) 00:07:54.967 6503.188 - 6553.600: 48.2429% ( 922) 00:07:54.967 6553.600 - 6604.012: 52.9412% ( 869) 00:07:54.967 6604.012 - 6654.425: 57.2340% ( 794) 00:07:54.967 6654.425 - 6704.837: 61.1970% ( 733) 00:07:54.967 6704.837 - 6755.249: 64.6788% ( 644) 00:07:54.967 6755.249 - 6805.662: 67.9174% ( 599) 00:07:54.967 6805.662 - 6856.074: 70.8423% ( 541) 00:07:54.967 6856.074 - 6906.486: 73.5510% ( 501) 00:07:54.967 6906.486 - 6956.898: 76.2219% ( 494) 00:07:54.967 6956.898 - 7007.311: 78.8170% ( 480) 00:07:54.967 7007.311 - 7057.723: 81.1797% ( 437) 00:07:54.967 7057.723 - 7108.135: 83.2288% ( 379) 00:07:54.967 7108.135 - 7158.548: 84.8292% ( 296) 00:07:54.967 7158.548 - 7208.960: 86.0564% ( 227) 00:07:54.967 7208.960 - 7259.372: 86.9053% ( 157) 00:07:54.967 7259.372 - 7309.785: 87.4784% ( 106) 00:07:54.967 7309.785 - 7360.197: 87.8460% ( 68) 00:07:54.967 7360.197 - 7410.609: 88.1866% ( 63) 00:07:54.967 7410.609 - 7461.022: 88.4678% ( 52) 00:07:54.967 7461.022 - 7511.434: 88.7165% ( 46) 00:07:54.967 7511.434 - 7561.846: 88.9381% ( 41) 00:07:54.967 7561.846 - 7612.258: 89.1490% ( 39) 00:07:54.967 7612.258 - 7662.671: 89.3058% ( 29) 00:07:54.967 7662.671 - 7713.083: 89.5004% ( 36) 00:07:54.967 7713.083 - 7763.495: 89.6843% ( 34) 00:07:54.967 7763.495 - 7813.908: 89.8627% ( 33) 00:07:54.967 7813.908 - 7864.320: 90.0303% ( 31) 00:07:54.967 7864.320 - 7914.732: 90.1546% ( 23) 00:07:54.967 7914.732 - 7965.145: 90.2844% ( 24) 00:07:54.967 7965.145 - 8015.557: 90.4196% ( 25) 00:07:54.967 8015.557 - 8065.969: 90.5493% ( 24) 00:07:54.967 8065.969 - 8116.382: 90.7007% ( 28) 00:07:54.967 8116.382 - 8166.794: 90.8575% ( 29) 00:07:54.967 8166.794 - 8217.206: 91.0305% ( 32) 00:07:54.967 8217.206 - 8267.618: 91.1873% ( 29) 00:07:54.967 
8267.618 - 8318.031: 91.3333% ( 27) 00:07:54.967 8318.031 - 8368.443: 91.4846% ( 28) 00:07:54.967 8368.443 - 8418.855: 91.6414% ( 29) 00:07:54.967 8418.855 - 8469.268: 91.7874% ( 27) 00:07:54.967 8469.268 - 8519.680: 91.9550% ( 31) 00:07:54.967 8519.680 - 8570.092: 92.1010% ( 27) 00:07:54.967 8570.092 - 8620.505: 92.2686% ( 31) 00:07:54.967 8620.505 - 8670.917: 92.4362% ( 31) 00:07:54.967 8670.917 - 8721.329: 92.5497% ( 21) 00:07:54.967 8721.329 - 8771.742: 92.6741% ( 23) 00:07:54.967 8771.742 - 8822.154: 92.7984% ( 23) 00:07:54.967 8822.154 - 8872.566: 92.9174% ( 22) 00:07:54.967 8872.566 - 8922.978: 93.0471% ( 24) 00:07:54.967 8922.978 - 8973.391: 93.1769% ( 24) 00:07:54.967 8973.391 - 9023.803: 93.3121% ( 25) 00:07:54.967 9023.803 - 9074.215: 93.4256% ( 21) 00:07:54.967 9074.215 - 9124.628: 93.5121% ( 16) 00:07:54.967 9124.628 - 9175.040: 93.6094% ( 18) 00:07:54.967 9175.040 - 9225.452: 93.6905% ( 15) 00:07:54.967 9225.452 - 9275.865: 93.7608% ( 13) 00:07:54.967 9275.865 - 9326.277: 93.8473% ( 16) 00:07:54.967 9326.277 - 9376.689: 93.9500% ( 19) 00:07:54.967 9376.689 - 9427.102: 94.0474% ( 18) 00:07:54.967 9427.102 - 9477.514: 94.1393% ( 17) 00:07:54.967 9477.514 - 9527.926: 94.2366% ( 18) 00:07:54.967 9527.926 - 9578.338: 94.3177% ( 15) 00:07:54.967 9578.338 - 9628.751: 94.4150% ( 18) 00:07:54.967 9628.751 - 9679.163: 94.5177% ( 19) 00:07:54.967 9679.163 - 9729.575: 94.6367% ( 22) 00:07:54.967 9729.575 - 9779.988: 94.7340% ( 18) 00:07:54.967 9779.988 - 9830.400: 94.8259% ( 17) 00:07:54.967 9830.400 - 9880.812: 94.9232% ( 18) 00:07:54.967 9880.812 - 9931.225: 95.0314% ( 20) 00:07:54.967 9931.225 - 9981.637: 95.1233% ( 17) 00:07:54.967 9981.637 - 10032.049: 95.2260% ( 19) 00:07:54.967 10032.049 - 10082.462: 95.3503% ( 23) 00:07:54.967 10082.462 - 10132.874: 95.4531% ( 19) 00:07:54.967 10132.874 - 10183.286: 95.5666% ( 21) 00:07:54.967 10183.286 - 10233.698: 95.6369% ( 13) 00:07:54.967 10233.698 - 10284.111: 95.7396% ( 19) 00:07:54.967 10284.111 - 10334.523: 
95.8261% ( 16) 00:07:54.967 10334.523 - 10384.935: 95.9018% ( 14) 00:07:54.967 10384.935 - 10435.348: 95.9883% ( 16) 00:07:54.967 10435.348 - 10485.760: 96.0748% ( 16) 00:07:54.967 10485.760 - 10536.172: 96.1451% ( 13) 00:07:54.967 10536.172 - 10586.585: 96.2208% ( 14) 00:07:54.967 10586.585 - 10636.997: 96.3019% ( 15) 00:07:54.967 10636.997 - 10687.409: 96.4046% ( 19) 00:07:54.967 10687.409 - 10737.822: 96.4857% ( 15) 00:07:54.967 10737.822 - 10788.234: 96.5776% ( 17) 00:07:54.967 10788.234 - 10838.646: 96.6750% ( 18) 00:07:54.967 10838.646 - 10889.058: 96.7344% ( 11) 00:07:54.967 10889.058 - 10939.471: 96.7939% ( 11) 00:07:54.967 10939.471 - 10989.883: 96.8642% ( 13) 00:07:54.967 10989.883 - 11040.295: 96.9399% ( 14) 00:07:54.967 11040.295 - 11090.708: 97.0102% ( 13) 00:07:54.967 11090.708 - 11141.120: 97.0859% ( 14) 00:07:54.967 11141.120 - 11191.532: 97.1399% ( 10) 00:07:54.967 11191.532 - 11241.945: 97.2048% ( 12) 00:07:54.967 11241.945 - 11292.357: 97.2805% ( 14) 00:07:54.967 11292.357 - 11342.769: 97.4048% ( 23) 00:07:54.967 11342.769 - 11393.182: 97.4805% ( 14) 00:07:54.967 11393.182 - 11443.594: 97.5454% ( 12) 00:07:54.967 11443.594 - 11494.006: 97.6211% ( 14) 00:07:54.967 11494.006 - 11544.418: 97.6914% ( 13) 00:07:54.967 11544.418 - 11594.831: 97.7617% ( 13) 00:07:54.967 11594.831 - 11645.243: 97.8266% ( 12) 00:07:54.967 11645.243 - 11695.655: 97.8752% ( 9) 00:07:54.967 11695.655 - 11746.068: 97.9401% ( 12) 00:07:54.967 11746.068 - 11796.480: 97.9833% ( 8) 00:07:54.967 11796.480 - 11846.892: 98.0428% ( 11) 00:07:54.967 11846.892 - 11897.305: 98.1023% ( 11) 00:07:54.967 11897.305 - 11947.717: 98.1564% ( 10) 00:07:54.967 11947.717 - 11998.129: 98.2158% ( 11) 00:07:54.967 11998.129 - 12048.542: 98.2753% ( 11) 00:07:54.967 12048.542 - 12098.954: 98.3186% ( 8) 00:07:54.967 12098.954 - 12149.366: 98.3618% ( 8) 00:07:54.967 12149.366 - 12199.778: 98.3942% ( 6) 00:07:54.967 12199.778 - 12250.191: 98.4375% ( 8) 00:07:54.967 12250.191 - 12300.603: 98.4753% ( 7) 
00:07:54.967 12300.603 - 12351.015: 98.5186% ( 8) 00:07:54.967 12351.015 - 12401.428: 98.5402% ( 4) 00:07:54.967 12401.428 - 12451.840: 98.5564% ( 3) 00:07:54.968 12451.840 - 12502.252: 98.5781% ( 4) 00:07:54.968 12502.252 - 12552.665: 98.5997% ( 4) 00:07:54.968 12552.665 - 12603.077: 98.6213% ( 4) 00:07:54.968 12603.077 - 12653.489: 98.6484% ( 5) 00:07:54.968 12703.902 - 12754.314: 98.6538% ( 1) 00:07:54.968 12754.314 - 12804.726: 98.6592% ( 1) 00:07:54.968 12804.726 - 12855.138: 98.6700% ( 2) 00:07:54.968 12855.138 - 12905.551: 98.6754% ( 1) 00:07:54.968 12905.551 - 13006.375: 98.6970% ( 4) 00:07:54.968 13006.375 - 13107.200: 98.7132% ( 3) 00:07:54.968 13107.200 - 13208.025: 98.7511% ( 7) 00:07:54.968 13208.025 - 13308.849: 98.8051% ( 10) 00:07:54.968 13308.849 - 13409.674: 98.8538% ( 9) 00:07:54.968 13409.674 - 13510.498: 98.9025% ( 9) 00:07:54.968 13510.498 - 13611.323: 98.9511% ( 9) 00:07:54.968 13611.323 - 13712.148: 99.0484% ( 18) 00:07:54.968 13712.148 - 13812.972: 99.1133% ( 12) 00:07:54.968 13812.972 - 13913.797: 99.1890% ( 14) 00:07:54.968 13913.797 - 14014.622: 99.2593% ( 13) 00:07:54.968 14014.622 - 14115.446: 99.3350% ( 14) 00:07:54.968 14115.446 - 14216.271: 99.3999% ( 12) 00:07:54.968 14216.271 - 14317.095: 99.4377% ( 7) 00:07:54.968 14317.095 - 14417.920: 99.4810% ( 8) 00:07:54.968 14417.920 - 14518.745: 99.5242% ( 8) 00:07:54.968 14518.745 - 14619.569: 99.5675% ( 8) 00:07:54.968 14619.569 - 14720.394: 99.5945% ( 5) 00:07:54.968 14720.394 - 14821.218: 99.6215% ( 5) 00:07:54.968 14821.218 - 14922.043: 99.6486% ( 5) 00:07:54.968 14922.043 - 15022.868: 99.6540% ( 1) 00:07:54.968 17644.308 - 17745.132: 99.6594% ( 1) 00:07:54.968 17745.132 - 17845.957: 99.6864% ( 5) 00:07:54.968 17845.957 - 17946.782: 99.7135% ( 5) 00:07:54.968 17946.782 - 18047.606: 99.7459% ( 6) 00:07:54.968 18047.606 - 18148.431: 99.7837% ( 7) 00:07:54.968 18148.431 - 18249.255: 99.8108% ( 5) 00:07:54.968 18249.255 - 18350.080: 99.8486% ( 7) 00:07:54.968 18350.080 - 18450.905: 
99.8811% ( 6) 00:07:54.968 18450.905 - 18551.729: 99.9081% ( 5) 00:07:54.968 18551.729 - 18652.554: 99.9405% ( 6) 00:07:54.968 18652.554 - 18753.378: 99.9730% ( 6) 00:07:54.968 18753.378 - 18854.203: 100.0000% ( 5) 00:07:54.968 00:07:54.968 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.968 ============================================================================== 00:07:54.968 Range in us Cumulative IO count 00:07:54.968 3377.625 - 3402.831: 0.0216% ( 4) 00:07:54.968 3402.831 - 3428.037: 0.0270% ( 1) 00:07:54.968 3428.037 - 3453.243: 0.0433% ( 3) 00:07:54.968 3453.243 - 3478.449: 0.0541% ( 2) 00:07:54.968 3478.449 - 3503.655: 0.0703% ( 3) 00:07:54.968 3503.655 - 3528.862: 0.0811% ( 2) 00:07:54.968 3528.862 - 3554.068: 0.0919% ( 2) 00:07:54.968 3554.068 - 3579.274: 0.1081% ( 3) 00:07:54.968 3579.274 - 3604.480: 0.1189% ( 2) 00:07:54.968 3604.480 - 3629.686: 0.1298% ( 2) 00:07:54.968 3629.686 - 3654.892: 0.1460% ( 3) 00:07:54.968 3654.892 - 3680.098: 0.1568% ( 2) 00:07:54.968 3680.098 - 3705.305: 0.1730% ( 3) 00:07:54.968 3705.305 - 3730.511: 0.1838% ( 2) 00:07:54.968 3730.511 - 3755.717: 0.1946% ( 2) 00:07:54.968 3755.717 - 3780.923: 0.2109% ( 3) 00:07:54.968 3780.923 - 3806.129: 0.2217% ( 2) 00:07:54.968 3806.129 - 3831.335: 0.2379% ( 3) 00:07:54.968 3831.335 - 3856.542: 0.2487% ( 2) 00:07:54.968 3856.542 - 3881.748: 0.2595% ( 2) 00:07:54.968 3881.748 - 3906.954: 0.2757% ( 3) 00:07:54.968 3906.954 - 3932.160: 0.2865% ( 2) 00:07:54.968 3932.160 - 3957.366: 0.2974% ( 2) 00:07:54.968 3957.366 - 3982.572: 0.3136% ( 3) 00:07:54.968 3982.572 - 4007.778: 0.3244% ( 2) 00:07:54.968 4007.778 - 4032.985: 0.3352% ( 2) 00:07:54.968 4032.985 - 4058.191: 0.3460% ( 2) 00:07:54.968 4839.582 - 4864.788: 0.3568% ( 2) 00:07:54.968 4864.788 - 4889.994: 0.3785% ( 4) 00:07:54.968 4889.994 - 4915.200: 0.3839% ( 1) 00:07:54.968 4915.200 - 4940.406: 0.3947% ( 2) 00:07:54.968 4940.406 - 4965.612: 0.4109% ( 3) 00:07:54.968 4965.612 - 4990.818: 0.4217% ( 2) 
00:07:54.968 4990.818 - 5016.025: 0.4325% ( 2) 00:07:54.968 5016.025 - 5041.231: 0.4487% ( 3) 00:07:54.968 5041.231 - 5066.437: 0.4596% ( 2) 00:07:54.968 5066.437 - 5091.643: 0.4758% ( 3) 00:07:54.968 5091.643 - 5116.849: 0.4866% ( 2) 00:07:54.968 5116.849 - 5142.055: 0.4974% ( 2) 00:07:54.968 5142.055 - 5167.262: 0.5136% ( 3) 00:07:54.968 5167.262 - 5192.468: 0.5244% ( 2) 00:07:54.968 5192.468 - 5217.674: 0.5353% ( 2) 00:07:54.968 5217.674 - 5242.880: 0.5515% ( 3) 00:07:54.968 5242.880 - 5268.086: 0.5623% ( 2) 00:07:54.968 5268.086 - 5293.292: 0.5785% ( 3) 00:07:54.968 5293.292 - 5318.498: 0.5893% ( 2) 00:07:54.968 5318.498 - 5343.705: 0.6001% ( 2) 00:07:54.968 5343.705 - 5368.911: 0.6163% ( 3) 00:07:54.968 5368.911 - 5394.117: 0.6272% ( 2) 00:07:54.968 5394.117 - 5419.323: 0.6434% ( 3) 00:07:54.968 5419.323 - 5444.529: 0.6542% ( 2) 00:07:54.968 5444.529 - 5469.735: 0.6650% ( 2) 00:07:54.968 5469.735 - 5494.942: 0.6812% ( 3) 00:07:54.968 5494.942 - 5520.148: 0.6920% ( 2) 00:07:54.968 5721.797 - 5747.003: 0.7840% ( 17) 00:07:54.968 5747.003 - 5772.209: 0.9678% ( 34) 00:07:54.968 5772.209 - 5797.415: 1.2976% ( 61) 00:07:54.968 5797.415 - 5822.622: 1.7247% ( 79) 00:07:54.968 5822.622 - 5847.828: 2.3248% ( 111) 00:07:54.968 5847.828 - 5873.034: 3.1520% ( 153) 00:07:54.968 5873.034 - 5898.240: 3.9792% ( 153) 00:07:54.968 5898.240 - 5923.446: 4.7794% ( 148) 00:07:54.968 5923.446 - 5948.652: 5.7256% ( 175) 00:07:54.968 5948.652 - 5973.858: 6.9474% ( 226) 00:07:54.968 5973.858 - 5999.065: 8.0666% ( 207) 00:07:54.968 5999.065 - 6024.271: 9.2452% ( 218) 00:07:54.968 6024.271 - 6049.477: 10.3914% ( 212) 00:07:54.968 6049.477 - 6074.683: 11.6025% ( 224) 00:07:54.968 6074.683 - 6099.889: 12.7595% ( 214) 00:07:54.968 6099.889 - 6125.095: 14.0192% ( 233) 00:07:54.968 6125.095 - 6150.302: 15.2790% ( 233) 00:07:54.968 6150.302 - 6175.508: 16.6577% ( 255) 00:07:54.968 6175.508 - 6200.714: 18.1228% ( 271) 00:07:54.968 6200.714 - 6225.920: 19.7232% ( 296) 00:07:54.968 6225.920 - 
6251.126: 21.2803% ( 288) 00:07:54.968 6251.126 - 6276.332: 23.0861% ( 334) 00:07:54.968 6276.332 - 6301.538: 25.0378% ( 361) 00:07:54.968 6301.538 - 6326.745: 27.1572% ( 392) 00:07:54.968 6326.745 - 6351.951: 29.3739% ( 410) 00:07:54.968 6351.951 - 6377.157: 31.6122% ( 414) 00:07:54.968 6377.157 - 6402.363: 33.7587% ( 397) 00:07:54.968 6402.363 - 6427.569: 36.1592% ( 444) 00:07:54.968 6427.569 - 6452.775: 38.5381% ( 440) 00:07:54.968 6452.775 - 6503.188: 43.2904% ( 879) 00:07:54.968 6503.188 - 6553.600: 48.0969% ( 889) 00:07:54.968 6553.600 - 6604.012: 52.9087% ( 890) 00:07:54.968 6604.012 - 6654.425: 57.2286% ( 799) 00:07:54.968 6654.425 - 6704.837: 61.1105% ( 718) 00:07:54.968 6704.837 - 6755.249: 64.6140% ( 648) 00:07:54.968 6755.249 - 6805.662: 67.8795% ( 604) 00:07:54.968 6805.662 - 6856.074: 70.7991% ( 540) 00:07:54.968 6856.074 - 6906.486: 73.5781% ( 514) 00:07:54.968 6906.486 - 6956.898: 76.2868% ( 501) 00:07:54.968 6956.898 - 7007.311: 78.9198% ( 487) 00:07:54.968 7007.311 - 7057.723: 81.2878% ( 438) 00:07:54.968 7057.723 - 7108.135: 83.3748% ( 386) 00:07:54.968 7108.135 - 7158.548: 85.0292% ( 306) 00:07:54.968 7158.548 - 7208.960: 86.2403% ( 224) 00:07:54.968 7208.960 - 7259.372: 87.1215% ( 163) 00:07:54.968 7259.372 - 7309.785: 87.6730% ( 102) 00:07:54.968 7309.785 - 7360.197: 88.0461% ( 69) 00:07:54.968 7360.197 - 7410.609: 88.4083% ( 67) 00:07:54.968 7410.609 - 7461.022: 88.6570% ( 46) 00:07:54.968 7461.022 - 7511.434: 88.8679% ( 39) 00:07:54.968 7511.434 - 7561.846: 89.0625% ( 36) 00:07:54.968 7561.846 - 7612.258: 89.2571% ( 36) 00:07:54.968 7612.258 - 7662.671: 89.3923% ( 25) 00:07:54.968 7662.671 - 7713.083: 89.5491% ( 29) 00:07:54.968 7713.083 - 7763.495: 89.6626% ( 21) 00:07:54.968 7763.495 - 7813.908: 89.7816% ( 22) 00:07:54.968 7813.908 - 7864.320: 89.9059% ( 23) 00:07:54.968 7864.320 - 7914.732: 90.0303% ( 23) 00:07:54.968 7914.732 - 7965.145: 90.1438% ( 21) 00:07:54.968 7965.145 - 8015.557: 90.2952% ( 28) 00:07:54.968 8015.557 - 8065.969: 
90.4087% ( 21) 00:07:54.968 8065.969 - 8116.382: 90.5385% ( 24) 00:07:54.968 8116.382 - 8166.794: 90.6412% ( 19) 00:07:54.968 8166.794 - 8217.206: 90.7710% ( 24) 00:07:54.968 8217.206 - 8267.618: 90.8953% ( 23) 00:07:54.968 8267.618 - 8318.031: 91.0251% ( 24) 00:07:54.968 8318.031 - 8368.443: 91.1548% ( 24) 00:07:54.968 8368.443 - 8418.855: 91.3008% ( 27) 00:07:54.968 8418.855 - 8469.268: 91.4846% ( 34) 00:07:54.968 8469.268 - 8519.680: 91.6631% ( 33) 00:07:54.968 8519.680 - 8570.092: 91.8144% ( 28) 00:07:54.968 8570.092 - 8620.505: 91.9766% ( 30) 00:07:54.968 8620.505 - 8670.917: 92.1551% ( 33) 00:07:54.968 8670.917 - 8721.329: 92.3443% ( 35) 00:07:54.968 8721.329 - 8771.742: 92.5119% ( 31) 00:07:54.968 8771.742 - 8822.154: 92.6795% ( 31) 00:07:54.968 8822.154 - 8872.566: 92.8633% ( 34) 00:07:54.968 8872.566 - 8922.978: 93.0417% ( 33) 00:07:54.968 8922.978 - 8973.391: 93.2093% ( 31) 00:07:54.968 8973.391 - 9023.803: 93.3986% ( 35) 00:07:54.968 9023.803 - 9074.215: 93.5824% ( 34) 00:07:54.968 9074.215 - 9124.628: 93.7338% ( 28) 00:07:54.968 9124.628 - 9175.040: 93.8852% ( 28) 00:07:54.968 9175.040 - 9225.452: 94.0203% ( 25) 00:07:54.968 9225.452 - 9275.865: 94.1231% ( 19) 00:07:54.968 9275.865 - 9326.277: 94.2258% ( 19) 00:07:54.968 9326.277 - 9376.689: 94.3393% ( 21) 00:07:54.968 9376.689 - 9427.102: 94.4366% ( 18) 00:07:54.968 9427.102 - 9477.514: 94.5502% ( 21) 00:07:54.968 9477.514 - 9527.926: 94.6421% ( 17) 00:07:54.968 9527.926 - 9578.338: 94.7232% ( 15) 00:07:54.968 9578.338 - 9628.751: 94.7989% ( 14) 00:07:54.968 9628.751 - 9679.163: 94.8692% ( 13) 00:07:54.968 9679.163 - 9729.575: 94.9449% ( 14) 00:07:54.968 9729.575 - 9779.988: 95.0151% ( 13) 00:07:54.968 9779.988 - 9830.400: 95.0908% ( 14) 00:07:54.968 9830.400 - 9880.812: 95.1503% ( 11) 00:07:54.968 9880.812 - 9931.225: 95.2152% ( 12) 00:07:54.969 9931.225 - 9981.637: 95.2747% ( 11) 00:07:54.969 9981.637 - 10032.049: 95.3341% ( 11) 00:07:54.969 10032.049 - 10082.462: 95.3936% ( 11) 00:07:54.969 
10082.462 - 10132.874: 95.4693% ( 14) 00:07:54.969 10132.874 - 10183.286: 95.5720% ( 19) 00:07:54.969 10183.286 - 10233.698: 95.6531% ( 15) 00:07:54.969 10233.698 - 10284.111: 95.7072% ( 10) 00:07:54.969 10284.111 - 10334.523: 95.7612% ( 10) 00:07:54.969 10334.523 - 10384.935: 95.8423% ( 15) 00:07:54.969 10384.935 - 10435.348: 95.9180% ( 14) 00:07:54.969 10435.348 - 10485.760: 95.9937% ( 14) 00:07:54.969 10485.760 - 10536.172: 96.0586% ( 12) 00:07:54.969 10536.172 - 10586.585: 96.1073% ( 9) 00:07:54.969 10586.585 - 10636.997: 96.1667% ( 11) 00:07:54.969 10636.997 - 10687.409: 96.2262% ( 11) 00:07:54.969 10687.409 - 10737.822: 96.2749% ( 9) 00:07:54.969 10737.822 - 10788.234: 96.3181% ( 8) 00:07:54.969 10788.234 - 10838.646: 96.3776% ( 11) 00:07:54.969 10838.646 - 10889.058: 96.4641% ( 16) 00:07:54.969 10889.058 - 10939.471: 96.5668% ( 19) 00:07:54.969 10939.471 - 10989.883: 96.6047% ( 7) 00:07:54.969 10989.883 - 11040.295: 96.6479% ( 8) 00:07:54.969 11040.295 - 11090.708: 96.7236% ( 14) 00:07:54.969 11090.708 - 11141.120: 96.8480% ( 23) 00:07:54.969 11141.120 - 11191.532: 96.9020% ( 10) 00:07:54.969 11191.532 - 11241.945: 96.9507% ( 9) 00:07:54.969 11241.945 - 11292.357: 97.0048% ( 10) 00:07:54.969 11292.357 - 11342.769: 97.1021% ( 18) 00:07:54.969 11342.769 - 11393.182: 97.1940% ( 17) 00:07:54.969 11393.182 - 11443.594: 97.2589% ( 12) 00:07:54.969 11443.594 - 11494.006: 97.3508% ( 17) 00:07:54.969 11494.006 - 11544.418: 97.4481% ( 18) 00:07:54.969 11544.418 - 11594.831: 97.5400% ( 17) 00:07:54.969 11594.831 - 11645.243: 97.6481% ( 20) 00:07:54.969 11645.243 - 11695.655: 97.7671% ( 22) 00:07:54.969 11695.655 - 11746.068: 97.8590% ( 17) 00:07:54.969 11746.068 - 11796.480: 97.9509% ( 17) 00:07:54.969 11796.480 - 11846.892: 98.0428% ( 17) 00:07:54.969 11846.892 - 11897.305: 98.1185% ( 14) 00:07:54.969 11897.305 - 11947.717: 98.1942% ( 14) 00:07:54.969 11947.717 - 11998.129: 98.2591% ( 12) 00:07:54.969 11998.129 - 12048.542: 98.3348% ( 14) 00:07:54.969 12048.542 - 
12098.954: 98.3942% ( 11) 00:07:54.969 12098.954 - 12149.366: 98.4483% ( 10) 00:07:54.969 12149.366 - 12199.778: 98.4916% ( 8) 00:07:54.969 12199.778 - 12250.191: 98.5240% ( 6) 00:07:54.969 12250.191 - 12300.603: 98.5673% ( 8) 00:07:54.969 12300.603 - 12351.015: 98.5889% ( 4) 00:07:54.969 12351.015 - 12401.428: 98.6105% ( 4) 00:07:54.969 12401.428 - 12451.840: 98.6159% ( 1) 00:07:54.969 12603.077 - 12653.489: 98.6321% ( 3) 00:07:54.969 12653.489 - 12703.902: 98.6484% ( 3) 00:07:54.969 12703.902 - 12754.314: 98.6700% ( 4) 00:07:54.969 12754.314 - 12804.726: 98.6916% ( 4) 00:07:54.969 12804.726 - 12855.138: 98.7186% ( 5) 00:07:54.969 12855.138 - 12905.551: 98.7511% ( 6) 00:07:54.969 12905.551 - 13006.375: 98.8268% ( 14) 00:07:54.969 13006.375 - 13107.200: 98.8917% ( 12) 00:07:54.969 13107.200 - 13208.025: 98.9728% ( 15) 00:07:54.969 13208.025 - 13308.849: 99.0430% ( 13) 00:07:54.969 13308.849 - 13409.674: 99.1133% ( 13) 00:07:54.969 13409.674 - 13510.498: 99.1836% ( 13) 00:07:54.969 13510.498 - 13611.323: 99.2539% ( 13) 00:07:54.969 13611.323 - 13712.148: 99.3026% ( 9) 00:07:54.969 13712.148 - 13812.972: 99.3512% ( 9) 00:07:54.969 13812.972 - 13913.797: 99.3945% ( 8) 00:07:54.969 13913.797 - 14014.622: 99.4377% ( 8) 00:07:54.969 14014.622 - 14115.446: 99.4756% ( 7) 00:07:54.969 14115.446 - 14216.271: 99.5188% ( 8) 00:07:54.969 14216.271 - 14317.095: 99.5621% ( 8) 00:07:54.969 14317.095 - 14417.920: 99.5891% ( 5) 00:07:54.969 14417.920 - 14518.745: 99.6053% ( 3) 00:07:54.969 14518.745 - 14619.569: 99.6161% ( 2) 00:07:54.969 14619.569 - 14720.394: 99.6324% ( 3) 00:07:54.969 14720.394 - 14821.218: 99.6486% ( 3) 00:07:54.969 14821.218 - 14922.043: 99.6540% ( 1) 00:07:54.969 17039.360 - 17140.185: 99.6594% ( 1) 00:07:54.969 17140.185 - 17241.009: 99.6810% ( 4) 00:07:54.969 17241.009 - 17341.834: 99.7189% ( 7) 00:07:54.969 17341.834 - 17442.658: 99.7513% ( 6) 00:07:54.969 17442.658 - 17543.483: 99.7837% ( 6) 00:07:54.969 17543.483 - 17644.308: 99.8162% ( 6) 00:07:54.969 
17644.308 - 17745.132: 99.8486% ( 6) 00:07:54.969 17745.132 - 17845.957: 99.8648% ( 3) 00:07:54.969 17845.957 - 17946.782: 99.9027% ( 7) 00:07:54.969 17946.782 - 18047.606: 99.9351% ( 6) 00:07:54.969 18047.606 - 18148.431: 99.9676% ( 6) 00:07:54.969 18148.431 - 18249.255: 100.0000% ( 6) 00:07:54.969 00:07:54.969 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.969 ============================================================================== 00:07:54.969 Range in us Cumulative IO count 00:07:54.969 3226.388 - 3251.594: 0.0162% ( 3) 00:07:54.969 3251.594 - 3276.800: 0.0541% ( 7) 00:07:54.969 3276.800 - 3302.006: 0.0703% ( 3) 00:07:54.969 3302.006 - 3327.212: 0.0757% ( 1) 00:07:54.969 3352.418 - 3377.625: 0.0919% ( 3) 00:07:54.969 3377.625 - 3402.831: 0.0973% ( 1) 00:07:54.969 3402.831 - 3428.037: 0.1244% ( 5) 00:07:54.969 3428.037 - 3453.243: 0.1298% ( 1) 00:07:54.969 3453.243 - 3478.449: 0.1514% ( 4) 00:07:54.969 3478.449 - 3503.655: 0.1568% ( 1) 00:07:54.969 3503.655 - 3528.862: 0.1676% ( 2) 00:07:54.969 3528.862 - 3554.068: 0.1784% ( 2) 00:07:54.969 3554.068 - 3579.274: 0.1892% ( 2) 00:07:54.969 3579.274 - 3604.480: 0.2000% ( 2) 00:07:54.969 3604.480 - 3629.686: 0.2163% ( 3) 00:07:54.969 3629.686 - 3654.892: 0.2271% ( 2) 00:07:54.969 3654.892 - 3680.098: 0.2433% ( 3) 00:07:54.969 3680.098 - 3705.305: 0.2541% ( 2) 00:07:54.969 3705.305 - 3730.511: 0.2703% ( 3) 00:07:54.969 3730.511 - 3755.717: 0.2811% ( 2) 00:07:54.969 3755.717 - 3780.923: 0.2920% ( 2) 00:07:54.969 3780.923 - 3806.129: 0.3082% ( 3) 00:07:54.969 3806.129 - 3831.335: 0.3190% ( 2) 00:07:54.969 3831.335 - 3856.542: 0.3298% ( 2) 00:07:54.969 3856.542 - 3881.748: 0.3460% ( 3) 00:07:54.969 4663.138 - 4688.345: 0.3622% ( 3) 00:07:54.969 4688.345 - 4713.551: 0.3785% ( 3) 00:07:54.969 4713.551 - 4738.757: 0.3839% ( 1) 00:07:54.969 4738.757 - 4763.963: 0.4001% ( 3) 00:07:54.969 4763.963 - 4789.169: 0.4109% ( 2) 00:07:54.969 4789.169 - 4814.375: 0.4217% ( 2) 00:07:54.969 4814.375 - 
4839.582: 0.4379% ( 3) 00:07:54.969 4839.582 - 4864.788: 0.4487% ( 2) 00:07:54.969 4864.788 - 4889.994: 0.4596% ( 2) 00:07:54.969 4889.994 - 4915.200: 0.4704% ( 2) 00:07:54.969 4915.200 - 4940.406: 0.4866% ( 3) 00:07:54.969 4940.406 - 4965.612: 0.4974% ( 2) 00:07:54.969 4965.612 - 4990.818: 0.5136% ( 3) 00:07:54.969 4990.818 - 5016.025: 0.5244% ( 2) 00:07:54.969 5016.025 - 5041.231: 0.5353% ( 2) 00:07:54.969 5041.231 - 5066.437: 0.5515% ( 3) 00:07:54.969 5066.437 - 5091.643: 0.5623% ( 2) 00:07:54.969 5091.643 - 5116.849: 0.5785% ( 3) 00:07:54.969 5116.849 - 5142.055: 0.5893% ( 2) 00:07:54.969 5142.055 - 5167.262: 0.6001% ( 2) 00:07:54.969 5167.262 - 5192.468: 0.6163% ( 3) 00:07:54.969 5192.468 - 5217.674: 0.6272% ( 2) 00:07:54.969 5217.674 - 5242.880: 0.6380% ( 2) 00:07:54.969 5242.880 - 5268.086: 0.6488% ( 2) 00:07:54.969 5268.086 - 5293.292: 0.6650% ( 3) 00:07:54.969 5293.292 - 5318.498: 0.6758% ( 2) 00:07:54.969 5318.498 - 5343.705: 0.6866% ( 2) 00:07:54.969 5343.705 - 5368.911: 0.6920% ( 1) 00:07:54.969 5671.385 - 5696.591: 0.7029% ( 2) 00:07:54.969 5696.591 - 5721.797: 0.7461% ( 8) 00:07:54.969 5721.797 - 5747.003: 0.8218% ( 14) 00:07:54.969 5747.003 - 5772.209: 0.9624% ( 26) 00:07:54.969 5772.209 - 5797.415: 1.2922% ( 61) 00:07:54.969 5797.415 - 5822.622: 1.8166% ( 97) 00:07:54.969 5822.622 - 5847.828: 2.6006% ( 145) 00:07:54.969 5847.828 - 5873.034: 3.2764% ( 125) 00:07:54.969 5873.034 - 5898.240: 4.0657% ( 146) 00:07:54.969 5898.240 - 5923.446: 4.9200% ( 158) 00:07:54.969 5923.446 - 5948.652: 5.8986% ( 181) 00:07:54.969 5948.652 - 5973.858: 6.8609% ( 178) 00:07:54.969 5973.858 - 5999.065: 7.9044% ( 193) 00:07:54.969 5999.065 - 6024.271: 9.1371% ( 228) 00:07:54.969 6024.271 - 6049.477: 10.3968% ( 233) 00:07:54.969 6049.477 - 6074.683: 11.6241% ( 227) 00:07:54.969 6074.683 - 6099.889: 12.8514% ( 227) 00:07:54.969 6099.889 - 6125.095: 14.1112% ( 233) 00:07:54.969 6125.095 - 6150.302: 15.4682% ( 251) 00:07:54.969 6150.302 - 6175.508: 16.8415% ( 254) 
00:07:54.969 6175.508 - 6200.714: 18.3175% ( 273) 00:07:54.969 6200.714 - 6225.920: 19.8367% ( 281) 00:07:54.969 6225.920 - 6251.126: 21.4154% ( 292) 00:07:54.969 6251.126 - 6276.332: 23.1239% ( 316) 00:07:54.969 6276.332 - 6301.538: 25.1298% ( 371) 00:07:54.969 6301.538 - 6326.745: 27.2113% ( 385) 00:07:54.969 6326.745 - 6351.951: 29.3901% ( 403) 00:07:54.969 6351.951 - 6377.157: 31.5420% ( 398) 00:07:54.969 6377.157 - 6402.363: 33.9317% ( 442) 00:07:54.969 6402.363 - 6427.569: 36.2619% ( 431) 00:07:54.969 6427.569 - 6452.775: 38.6354% ( 439) 00:07:54.969 6452.775 - 6503.188: 43.3878% ( 879) 00:07:54.969 6503.188 - 6553.600: 48.2321% ( 896) 00:07:54.969 6553.600 - 6604.012: 53.0655% ( 894) 00:07:54.969 6604.012 - 6654.425: 57.3475% ( 792) 00:07:54.969 6654.425 - 6704.837: 61.2024% ( 713) 00:07:54.969 6704.837 - 6755.249: 64.7221% ( 651) 00:07:54.969 6755.249 - 6805.662: 67.9282% ( 593) 00:07:54.969 6805.662 - 6856.074: 70.9180% ( 553) 00:07:54.969 6856.074 - 6906.486: 73.6808% ( 511) 00:07:54.969 6906.486 - 6956.898: 76.4706% ( 516) 00:07:54.969 6956.898 - 7007.311: 79.0657% ( 480) 00:07:54.969 7007.311 - 7057.723: 81.5041% ( 451) 00:07:54.969 7057.723 - 7108.135: 83.6884% ( 404) 00:07:54.969 7108.135 - 7158.548: 85.4563% ( 327) 00:07:54.969 7158.548 - 7208.960: 86.7269% ( 235) 00:07:54.969 7208.960 - 7259.372: 87.5270% ( 148) 00:07:54.969 7259.372 - 7309.785: 88.0785% ( 102) 00:07:54.969 7309.785 - 7360.197: 88.4624% ( 71) 00:07:54.969 7360.197 - 7410.609: 88.7868% ( 60) 00:07:54.969 7410.609 - 7461.022: 89.0787% ( 54) 00:07:54.969 7461.022 - 7511.434: 89.3328% ( 47) 00:07:54.970 7511.434 - 7561.846: 89.5599% ( 42) 00:07:54.970 7561.846 - 7612.258: 89.7329% ( 32) 00:07:54.970 7612.258 - 7662.671: 89.8789% ( 27) 00:07:54.970 7662.671 - 7713.083: 90.0032% ( 23) 00:07:54.970 7713.083 - 7763.495: 90.1168% ( 21) 00:07:54.970 7763.495 - 7813.908: 90.2357% ( 22) 00:07:54.970 7813.908 - 7864.320: 90.3547% ( 22) 00:07:54.970 7864.320 - 7914.732: 90.4628% ( 20) 
00:07:54.970 7914.732 - 7965.145: 90.5655% ( 19) 00:07:54.970 7965.145 - 8015.557: 90.6845% ( 22) 00:07:54.970 8015.557 - 8065.969: 90.7926% ( 20) 00:07:54.970 8065.969 - 8116.382: 90.8953% ( 19) 00:07:54.970 8116.382 - 8166.794: 90.9981% ( 19) 00:07:54.970 8166.794 - 8217.206: 91.1170% ( 22) 00:07:54.970 8217.206 - 8267.618: 91.2684% ( 28) 00:07:54.970 8267.618 - 8318.031: 91.3927% ( 23) 00:07:54.970 8318.031 - 8368.443: 91.5117% ( 22) 00:07:54.970 8368.443 - 8418.855: 91.6090% ( 18) 00:07:54.970 8418.855 - 8469.268: 91.7063% ( 18) 00:07:54.970 8469.268 - 8519.680: 91.8090% ( 19) 00:07:54.970 8519.680 - 8570.092: 91.9010% ( 17) 00:07:54.970 8570.092 - 8620.505: 91.9983% ( 18) 00:07:54.970 8620.505 - 8670.917: 92.1551% ( 29) 00:07:54.970 8670.917 - 8721.329: 92.3119% ( 29) 00:07:54.970 8721.329 - 8771.742: 92.4849% ( 32) 00:07:54.970 8771.742 - 8822.154: 92.6092% ( 23) 00:07:54.970 8822.154 - 8872.566: 92.7606% ( 28) 00:07:54.970 8872.566 - 8922.978: 92.9174% ( 29) 00:07:54.970 8922.978 - 8973.391: 93.0850% ( 31) 00:07:54.970 8973.391 - 9023.803: 93.2526% ( 31) 00:07:54.970 9023.803 - 9074.215: 93.3986% ( 27) 00:07:54.970 9074.215 - 9124.628: 93.5337% ( 25) 00:07:54.970 9124.628 - 9175.040: 93.6689% ( 25) 00:07:54.970 9175.040 - 9225.452: 93.8095% ( 26) 00:07:54.970 9225.452 - 9275.865: 93.9500% ( 26) 00:07:54.970 9275.865 - 9326.277: 94.0798% ( 24) 00:07:54.970 9326.277 - 9376.689: 94.2096% ( 24) 00:07:54.970 9376.689 - 9427.102: 94.3555% ( 27) 00:07:54.970 9427.102 - 9477.514: 94.4961% ( 26) 00:07:54.970 9477.514 - 9527.926: 94.6151% ( 22) 00:07:54.970 9527.926 - 9578.338: 94.7340% ( 22) 00:07:54.970 9578.338 - 9628.751: 94.8421% ( 20) 00:07:54.970 9628.751 - 9679.163: 94.9340% ( 17) 00:07:54.970 9679.163 - 9729.575: 95.0043% ( 13) 00:07:54.970 9729.575 - 9779.988: 95.0800% ( 14) 00:07:54.970 9779.988 - 9830.400: 95.1341% ( 10) 00:07:54.970 9830.400 - 9880.812: 95.1936% ( 11) 00:07:54.970 9880.812 - 9931.225: 95.2530% ( 11) 00:07:54.970 9931.225 - 9981.637: 
95.3233% ( 13) 00:07:54.970 9981.637 - 10032.049: 95.3828% ( 11) 00:07:54.970 10032.049 - 10082.462: 95.4423% ( 11) 00:07:54.970 10082.462 - 10132.874: 95.4963% ( 10) 00:07:54.970 10132.874 - 10183.286: 95.5504% ( 10) 00:07:54.970 10183.286 - 10233.698: 95.6045% ( 10) 00:07:54.970 10233.698 - 10284.111: 95.6477% ( 8) 00:07:54.970 10284.111 - 10334.523: 95.6747% ( 5) 00:07:54.970 10334.523 - 10384.935: 95.6856% ( 2) 00:07:54.970 10384.935 - 10435.348: 95.6964% ( 2) 00:07:54.970 10435.348 - 10485.760: 95.7072% ( 2) 00:07:54.970 10485.760 - 10536.172: 95.7342% ( 5) 00:07:54.970 10536.172 - 10586.585: 95.7721% ( 7) 00:07:54.970 10586.585 - 10636.997: 95.8045% ( 6) 00:07:54.970 10636.997 - 10687.409: 95.8315% ( 5) 00:07:54.970 10687.409 - 10737.822: 95.8640% ( 6) 00:07:54.970 10737.822 - 10788.234: 95.9126% ( 9) 00:07:54.970 10788.234 - 10838.646: 95.9613% ( 9) 00:07:54.970 10838.646 - 10889.058: 96.0154% ( 10) 00:07:54.970 10889.058 - 10939.471: 96.0802% ( 12) 00:07:54.970 10939.471 - 10989.883: 96.1397% ( 11) 00:07:54.970 10989.883 - 11040.295: 96.2100% ( 13) 00:07:54.970 11040.295 - 11090.708: 96.2857% ( 14) 00:07:54.970 11090.708 - 11141.120: 96.3506% ( 12) 00:07:54.970 11141.120 - 11191.532: 96.3992% ( 9) 00:07:54.970 11191.532 - 11241.945: 96.4641% ( 12) 00:07:54.970 11241.945 - 11292.357: 96.5344% ( 13) 00:07:54.970 11292.357 - 11342.769: 96.6263% ( 17) 00:07:54.970 11342.769 - 11393.182: 96.7182% ( 17) 00:07:54.970 11393.182 - 11443.594: 96.8101% ( 17) 00:07:54.970 11443.594 - 11494.006: 96.9561% ( 27) 00:07:54.970 11494.006 - 11544.418: 97.0750% ( 22) 00:07:54.970 11544.418 - 11594.831: 97.2048% ( 24) 00:07:54.970 11594.831 - 11645.243: 97.3400% ( 25) 00:07:54.970 11645.243 - 11695.655: 97.4535% ( 21) 00:07:54.970 11695.655 - 11746.068: 97.5508% ( 18) 00:07:54.970 11746.068 - 11796.480: 97.6535% ( 19) 00:07:54.970 11796.480 - 11846.892: 97.7346% ( 15) 00:07:54.970 11846.892 - 11897.305: 97.8320% ( 18) 00:07:54.970 11897.305 - 11947.717: 97.9185% ( 16) 
00:07:54.970 11947.717 - 11998.129: 98.0158% ( 18) 00:07:54.970 11998.129 - 12048.542: 98.1077% ( 17) 00:07:54.970 12048.542 - 12098.954: 98.1996% ( 17) 00:07:54.970 12098.954 - 12149.366: 98.2969% ( 18) 00:07:54.970 12149.366 - 12199.778: 98.3942% ( 18) 00:07:54.970 12199.778 - 12250.191: 98.4862% ( 17) 00:07:54.970 12250.191 - 12300.603: 98.5835% ( 18) 00:07:54.970 12300.603 - 12351.015: 98.6267% ( 8) 00:07:54.970 12351.015 - 12401.428: 98.6592% ( 6) 00:07:54.970 12401.428 - 12451.840: 98.7024% ( 8) 00:07:54.970 12451.840 - 12502.252: 98.7457% ( 8) 00:07:54.970 12502.252 - 12552.665: 98.7781% ( 6) 00:07:54.970 12552.665 - 12603.077: 98.8160% ( 7) 00:07:54.970 12603.077 - 12653.489: 98.8538% ( 7) 00:07:54.970 12653.489 - 12703.902: 98.8917% ( 7) 00:07:54.970 12703.902 - 12754.314: 98.9241% ( 6) 00:07:54.970 12754.314 - 12804.726: 98.9619% ( 7) 00:07:54.970 12804.726 - 12855.138: 99.0052% ( 8) 00:07:54.970 12855.138 - 12905.551: 99.0322% ( 5) 00:07:54.970 12905.551 - 13006.375: 99.1079% ( 14) 00:07:54.970 13006.375 - 13107.200: 99.1782% ( 13) 00:07:54.970 13107.200 - 13208.025: 99.2377% ( 11) 00:07:54.970 13208.025 - 13308.849: 99.2701% ( 6) 00:07:54.970 13308.849 - 13409.674: 99.3134% ( 8) 00:07:54.970 13409.674 - 13510.498: 99.3512% ( 7) 00:07:54.970 13510.498 - 13611.323: 99.3891% ( 7) 00:07:54.970 13611.323 - 13712.148: 99.4269% ( 7) 00:07:54.970 13712.148 - 13812.972: 99.4539% ( 5) 00:07:54.970 13812.972 - 13913.797: 99.4756% ( 4) 00:07:54.970 13913.797 - 14014.622: 99.4918% ( 3) 00:07:54.970 14014.622 - 14115.446: 99.5080% ( 3) 00:07:54.970 14115.446 - 14216.271: 99.5242% ( 3) 00:07:54.970 14216.271 - 14317.095: 99.5404% ( 3) 00:07:54.970 14317.095 - 14417.920: 99.5567% ( 3) 00:07:54.970 14417.920 - 14518.745: 99.5729% ( 3) 00:07:54.970 14518.745 - 14619.569: 99.5837% ( 2) 00:07:54.970 14619.569 - 14720.394: 99.5999% ( 3) 00:07:54.970 14720.394 - 14821.218: 99.6215% ( 4) 00:07:54.970 14821.218 - 14922.043: 99.6378% ( 3) 00:07:54.970 14922.043 - 15022.868: 
99.6540% ( 3)
00:07:54.970 16535.237 - 16636.062: 99.6756% ( 4)
00:07:54.970 16636.062 - 16736.886: 99.7080% ( 6)
00:07:54.970 16736.886 - 16837.711: 99.7351% ( 5)
00:07:54.970 16837.711 - 16938.535: 99.7675% ( 6)
00:07:54.970 16938.535 - 17039.360: 99.8000% ( 6)
00:07:54.970 17039.360 - 17140.185: 99.8270% ( 5)
00:07:54.970 17140.185 - 17241.009: 99.8594% ( 6)
00:07:54.970 17241.009 - 17341.834: 99.8919% ( 6)
00:07:54.970 17341.834 - 17442.658: 99.9297% ( 7)
00:07:54.970 17442.658 - 17543.483: 99.9622% ( 6)
00:07:54.970 17543.483 - 17644.308: 99.9946% ( 6)
00:07:54.970 17644.308 - 17745.132: 100.0000% ( 1)
00:07:54.970
00:07:54.970 01:07:28 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:55.907 Initializing NVMe Controllers
00:07:55.907 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:55.907 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:55.907 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:55.907 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:55.908 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:55.908 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:55.908 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:55.908 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:55.908 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:55.908 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:55.908 Initialization complete. Launching workers.
00:07:55.908 ========================================================
00:07:55.908 Latency(us)
00:07:55.908 Device Information : IOPS MiB/s Average min max
00:07:55.908 PCIE (0000:00:13.0) NSID 1 from core 0: 16714.76 195.88 7662.31 5868.81 21154.03
00:07:55.908 PCIE (0000:00:10.0) NSID 1 from core 0: 16714.76 195.88 7654.86 5261.20 19791.95
00:07:55.908 PCIE (0000:00:11.0) NSID 1 from core 0: 16714.76 195.88 7648.25 5018.86 19520.59
00:07:55.908 PCIE (0000:00:12.0) NSID 1 from core 0: 16714.76 195.88 7641.89 4460.15 18920.55
00:07:55.908 PCIE (0000:00:12.0) NSID 2 from core 0: 16714.76 195.88 7635.35 4112.64 18409.56
00:07:55.908 PCIE (0000:00:12.0) NSID 3 from core 0: 16714.76 195.88 7628.97 3581.35 18023.06
00:07:55.908 ========================================================
00:07:55.908 Total : 100288.58 1175.26 7645.27 3581.35 21154.03
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:55.908 =================================================================================
00:07:55.908 1.00000% : 6276.332us
00:07:55.908 10.00000% : 6553.600us
00:07:55.908 25.00000% : 6755.249us
00:07:55.908 50.00000% : 7057.723us
00:07:55.908 75.00000% : 8166.794us
00:07:55.908 90.00000% : 9376.689us
00:07:55.908 95.00000% : 11241.945us
00:07:55.908 98.00000% : 12199.778us
00:07:55.908 99.00000% : 13812.972us
00:07:55.908 99.50000% : 14619.569us
00:07:55.908 99.90000% : 20870.695us
00:07:55.908 99.99000% : 21173.169us
00:07:55.908 99.99900% : 21173.169us
00:07:55.908 99.99990% : 21173.169us
00:07:55.908 99.99999% : 21173.169us
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:55.908 =================================================================================
00:07:55.908 1.00000% : 6150.302us
00:07:55.908 10.00000% : 6503.188us
00:07:55.908 25.00000% : 6704.837us
00:07:55.908 50.00000% : 7158.548us
00:07:55.908 75.00000% : 8065.969us
00:07:55.908 90.00000% : 9225.452us
00:07:55.908 95.00000% : 11191.532us
00:07:55.908 98.00000% : 12300.603us
00:07:55.908 99.00000% : 13006.375us
00:07:55.908 99.50000% : 15728.640us
00:07:55.908 99.90000% : 19660.800us
00:07:55.908 99.99000% : 19862.449us
00:07:55.908 99.99900% : 19862.449us
00:07:55.908 99.99990% : 19862.449us
00:07:55.908 99.99999% : 19862.449us
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:55.908 =================================================================================
00:07:55.908 1.00000% : 6251.126us
00:07:55.908 10.00000% : 6553.600us
00:07:55.908 25.00000% : 6704.837us
00:07:55.908 50.00000% : 7108.135us
00:07:55.908 75.00000% : 8116.382us
00:07:55.908 90.00000% : 9376.689us
00:07:55.908 95.00000% : 11342.769us
00:07:55.908 98.00000% : 12250.191us
00:07:55.908 99.00000% : 13308.849us
00:07:55.908 99.50000% : 15224.517us
00:07:55.908 99.90000% : 19055.852us
00:07:55.908 99.99000% : 19559.975us
00:07:55.908 99.99900% : 19559.975us
00:07:55.908 99.99990% : 19559.975us
00:07:55.908 99.99999% : 19559.975us
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:55.908 =================================================================================
00:07:55.908 1.00000% : 6225.920us
00:07:55.908 10.00000% : 6553.600us
00:07:55.908 25.00000% : 6704.837us
00:07:55.908 50.00000% : 7108.135us
00:07:55.908 75.00000% : 8116.382us
00:07:55.908 90.00000% : 9477.514us
00:07:55.908 95.00000% : 11141.120us
00:07:55.908 98.00000% : 12199.778us
00:07:55.908 99.00000% : 14014.622us
00:07:55.908 99.50000% : 14821.218us
00:07:55.908 99.90000% : 18652.554us
00:07:55.908 99.99000% : 18955.028us
00:07:55.908 99.99900% : 18955.028us
00:07:55.908 99.99990% : 18955.028us
00:07:55.908 99.99999% : 18955.028us
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:55.908 ================================================================================= 00:07:55.908
1.00000% : 6225.920us
00:07:55.908 10.00000% : 6553.600us
00:07:55.908 25.00000% : 6704.837us
00:07:55.908 50.00000% : 7108.135us
00:07:55.908 75.00000% : 8116.382us
00:07:55.908 90.00000% : 9578.338us
00:07:55.908 95.00000% : 10889.058us
00:07:55.908 98.00000% : 12250.191us
00:07:55.908 99.00000% : 14014.622us
00:07:55.908 99.50000% : 14417.920us
00:07:55.908 99.90000% : 18148.431us
00:07:55.908 99.99000% : 18450.905us
00:07:55.908 99.99900% : 18450.905us
00:07:55.908 99.99990% : 18450.905us
00:07:55.908 99.99999% : 18450.905us
00:07:55.908
00:07:55.908 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:55.908 =================================================================================
00:07:55.908 1.00000% : 6251.126us
00:07:55.908 10.00000% : 6553.600us
00:07:55.908 25.00000% : 6704.837us
00:07:55.908 50.00000% : 7057.723us
00:07:55.908 75.00000% : 8116.382us
00:07:55.908 90.00000% : 9628.751us
00:07:55.908 95.00000% : 11090.708us
00:07:55.908 98.00000% : 12250.191us
00:07:55.908 99.00000% : 13812.972us
00:07:55.908 99.50000% : 14115.446us
00:07:55.908 99.90000% : 17442.658us
00:07:55.908 99.99000% : 18047.606us
00:07:55.908 99.99900% : 18047.606us
00:07:55.908 99.99990% : 18047.606us
00:07:55.908 99.99999% : 18047.606us
00:07:55.908
00:07:55.908 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:55.908 ==============================================================================
00:07:55.908 Range in us Cumulative IO count
00:07:55.908 5847.828 - 5873.034: 0.0119% ( 2)
00:07:55.908 5873.034 - 5898.240: 0.0477% ( 6)
00:07:55.908 5898.240 - 5923.446: 0.1312% ( 14)
00:07:55.908 5923.446 - 5948.652: 0.1729% ( 7)
00:07:55.908 5948.652 - 5973.858: 0.1968% ( 4)
00:07:55.908 5973.858 - 5999.065: 0.2326% ( 6)
00:07:55.908 5999.065 - 6024.271: 0.2684% ( 6)
00:07:55.908 6024.271 - 6049.477: 0.2863% ( 3)
00:07:55.908 6049.477 - 6074.683: 0.3220% ( 6)
00:07:55.908 6074.683 - 6099.889: 0.3757% ( 9)
00:07:55.908 6099.889 - 6125.095:
0.4115% ( 6) 00:07:55.908 6125.095 - 6150.302: 0.4532% ( 7) 00:07:55.908 6150.302 - 6175.508: 0.5129% ( 10) 00:07:55.908 6175.508 - 6200.714: 0.6202% ( 18) 00:07:55.908 6200.714 - 6225.920: 0.7753% ( 26) 00:07:55.908 6225.920 - 6251.126: 0.9005% ( 21) 00:07:55.908 6251.126 - 6276.332: 1.0973% ( 33) 00:07:55.908 6276.332 - 6301.538: 1.4194% ( 54) 00:07:55.908 6301.538 - 6326.745: 1.8010% ( 64) 00:07:55.908 6326.745 - 6351.951: 2.3438% ( 91) 00:07:55.908 6351.951 - 6377.157: 3.0713% ( 122) 00:07:55.908 6377.157 - 6402.363: 4.0852% ( 170) 00:07:55.908 6402.363 - 6427.569: 5.1885% ( 185) 00:07:55.908 6427.569 - 6452.775: 6.4707% ( 215) 00:07:55.908 6452.775 - 6503.188: 9.3213% ( 478) 00:07:55.908 6503.188 - 6553.600: 13.4184% ( 687) 00:07:55.908 6553.600 - 6604.012: 17.4320% ( 673) 00:07:55.908 6604.012 - 6654.425: 21.0222% ( 602) 00:07:55.908 6654.425 - 6704.837: 24.9404% ( 657) 00:07:55.908 6704.837 - 6755.249: 29.6994% ( 798) 00:07:55.908 6755.249 - 6805.662: 34.2736% ( 767) 00:07:55.908 6805.662 - 6856.074: 37.7743% ( 587) 00:07:55.908 6856.074 - 6906.486: 40.8099% ( 509) 00:07:55.908 6906.486 - 6956.898: 43.9051% ( 519) 00:07:55.908 6956.898 - 7007.311: 46.7975% ( 485) 00:07:55.909 7007.311 - 7057.723: 50.1014% ( 554) 00:07:55.909 7057.723 - 7108.135: 52.1947% ( 351) 00:07:55.909 7108.135 - 7158.548: 53.5126% ( 221) 00:07:55.909 7158.548 - 7208.960: 55.0215% ( 253) 00:07:55.909 7208.960 - 7259.372: 56.0949% ( 180) 00:07:55.909 7259.372 - 7309.785: 57.6813% ( 266) 00:07:55.909 7309.785 - 7360.197: 58.5460% ( 145) 00:07:55.909 7360.197 - 7410.609: 59.3273% ( 131) 00:07:55.909 7410.609 - 7461.022: 60.0310% ( 118) 00:07:55.909 7461.022 - 7511.434: 60.5499% ( 87) 00:07:55.909 7511.434 - 7561.846: 61.2715% ( 121) 00:07:55.909 7561.846 - 7612.258: 61.8738% ( 101) 00:07:55.909 7612.258 - 7662.671: 62.6789% ( 135) 00:07:55.909 7662.671 - 7713.083: 63.5496% ( 146) 00:07:55.909 7713.083 - 7763.495: 64.5813% ( 173) 00:07:55.909 7763.495 - 7813.908: 65.9590% ( 231) 
00:07:55.909 7813.908 - 7864.320: 67.3068% ( 226) 00:07:55.909 7864.320 - 7914.732: 68.7440% ( 241) 00:07:55.909 7914.732 - 7965.145: 70.2111% ( 246) 00:07:55.909 7965.145 - 8015.557: 71.6126% ( 235) 00:07:55.909 8015.557 - 8065.969: 73.0677% ( 244) 00:07:55.909 8065.969 - 8116.382: 74.8569% ( 300) 00:07:55.909 8116.382 - 8166.794: 76.6162% ( 295) 00:07:55.909 8166.794 - 8217.206: 78.4948% ( 315) 00:07:55.909 8217.206 - 8267.618: 79.8783% ( 232) 00:07:55.909 8267.618 - 8318.031: 81.1605% ( 215) 00:07:55.909 8318.031 - 8368.443: 82.3950% ( 207) 00:07:55.909 8368.443 - 8418.855: 83.3671% ( 163) 00:07:55.909 8418.855 - 8469.268: 84.1842% ( 137) 00:07:55.909 8469.268 - 8519.680: 85.1384% ( 160) 00:07:55.909 8519.680 - 8570.092: 86.0270% ( 149) 00:07:55.909 8570.092 - 8620.505: 86.7605% ( 123) 00:07:55.909 8620.505 - 8670.917: 87.3986% ( 107) 00:07:55.909 8670.917 - 8721.329: 88.0546% ( 110) 00:07:55.909 8721.329 - 8771.742: 88.4661% ( 69) 00:07:55.909 8771.742 - 8822.154: 88.7703% ( 51) 00:07:55.909 8822.154 - 8872.566: 88.9373% ( 28) 00:07:55.909 8872.566 - 8922.978: 89.0983% ( 27) 00:07:55.909 8922.978 - 8973.391: 89.2593% ( 27) 00:07:55.909 8973.391 - 9023.803: 89.4323% ( 29) 00:07:55.909 9023.803 - 9074.215: 89.5277% ( 16) 00:07:55.909 9074.215 - 9124.628: 89.5933% ( 11) 00:07:55.909 9124.628 - 9175.040: 89.6529% ( 10) 00:07:55.909 9175.040 - 9225.452: 89.7185% ( 11) 00:07:55.909 9225.452 - 9275.865: 89.8318% ( 19) 00:07:55.909 9275.865 - 9326.277: 89.9630% ( 22) 00:07:55.909 9326.277 - 9376.689: 90.1240% ( 27) 00:07:55.909 9376.689 - 9427.102: 90.2254% ( 17) 00:07:55.909 9427.102 - 9477.514: 90.3447% ( 20) 00:07:55.909 9477.514 - 9527.926: 90.4699% ( 21) 00:07:55.909 9527.926 - 9578.338: 90.8158% ( 58) 00:07:55.909 9578.338 - 9628.751: 90.9292% ( 19) 00:07:55.909 9628.751 - 9679.163: 91.0723% ( 24) 00:07:55.909 9679.163 - 9729.575: 91.1975% ( 21) 00:07:55.909 9729.575 - 9779.988: 91.3287% ( 22) 00:07:55.909 9779.988 - 9830.400: 91.4838% ( 26) 00:07:55.909 9830.400 
- 9880.812: 91.6567% ( 29) 00:07:55.909 9880.812 - 9931.225: 91.8476% ( 32) 00:07:55.909 9931.225 - 9981.637: 92.0384% ( 32) 00:07:55.909 9981.637 - 10032.049: 92.2054% ( 28) 00:07:55.909 10032.049 - 10082.462: 92.3306% ( 21) 00:07:55.909 10082.462 - 10132.874: 92.5155% ( 31) 00:07:55.909 10132.874 - 10183.286: 92.6407% ( 21) 00:07:55.909 10183.286 - 10233.698: 92.7600% ( 20) 00:07:55.909 10233.698 - 10284.111: 92.8614% ( 17) 00:07:55.909 10284.111 - 10334.523: 93.2013% ( 57) 00:07:55.909 10334.523 - 10384.935: 93.3385% ( 23) 00:07:55.909 10384.935 - 10435.348: 93.4697% ( 22) 00:07:55.909 10435.348 - 10485.760: 93.5949% ( 21) 00:07:55.909 10485.760 - 10536.172: 93.7202% ( 21) 00:07:55.909 10536.172 - 10586.585: 93.8275% ( 18) 00:07:55.909 10586.585 - 10636.997: 93.8991% ( 12) 00:07:55.909 10636.997 - 10687.409: 93.9408% ( 7) 00:07:55.909 10687.409 - 10737.822: 93.9766% ( 6) 00:07:55.909 10737.822 - 10788.234: 94.0243% ( 8) 00:07:55.909 10788.234 - 10838.646: 94.0959% ( 12) 00:07:55.909 10838.646 - 10889.058: 94.1973% ( 17) 00:07:55.909 10889.058 - 10939.471: 94.2867% ( 15) 00:07:55.909 10939.471 - 10989.883: 94.3822% ( 16) 00:07:55.909 10989.883 - 11040.295: 94.4895% ( 18) 00:07:55.909 11040.295 - 11090.708: 94.6386% ( 25) 00:07:55.909 11090.708 - 11141.120: 94.7996% ( 27) 00:07:55.909 11141.120 - 11191.532: 94.9368% ( 23) 00:07:55.909 11191.532 - 11241.945: 95.1455% ( 35) 00:07:55.909 11241.945 - 11292.357: 95.2946% ( 25) 00:07:55.909 11292.357 - 11342.769: 95.4556% ( 27) 00:07:55.909 11342.769 - 11393.182: 95.6226% ( 28) 00:07:55.909 11393.182 - 11443.594: 95.8850% ( 44) 00:07:55.909 11443.594 - 11494.006: 96.1534% ( 45) 00:07:55.909 11494.006 - 11544.418: 96.3800% ( 38) 00:07:55.909 11544.418 - 11594.831: 96.5887% ( 35) 00:07:55.909 11594.831 - 11645.243: 96.7975% ( 35) 00:07:55.909 11645.243 - 11695.655: 96.9764% ( 30) 00:07:55.909 11695.655 - 11746.068: 97.1434% ( 28) 00:07:55.909 11746.068 - 11796.480: 97.2388% ( 16) 00:07:55.909 11796.480 - 11846.892: 
97.3342% ( 16) 00:07:55.909 11846.892 - 11897.305: 97.4296% ( 16) 00:07:55.909 11897.305 - 11947.717: 97.5072% ( 13) 00:07:55.909 11947.717 - 11998.129: 97.6026% ( 16) 00:07:55.909 11998.129 - 12048.542: 97.6741% ( 12) 00:07:55.909 12048.542 - 12098.954: 97.7755% ( 17) 00:07:55.909 12098.954 - 12149.366: 97.9783% ( 34) 00:07:55.909 12149.366 - 12199.778: 98.2288% ( 42) 00:07:55.909 12199.778 - 12250.191: 98.3600% ( 22) 00:07:55.909 12250.191 - 12300.603: 98.4792% ( 20) 00:07:55.909 12300.603 - 12351.015: 98.5627% ( 14) 00:07:55.909 12351.015 - 12401.428: 98.6104% ( 8) 00:07:55.909 12401.428 - 12451.840: 98.6462% ( 6) 00:07:55.909 12451.840 - 12502.252: 98.6701% ( 4) 00:07:55.909 12502.252 - 12552.665: 98.6820% ( 2) 00:07:55.909 12552.665 - 12603.077: 98.6999% ( 3) 00:07:55.909 12603.077 - 12653.489: 98.7178% ( 3) 00:07:55.909 12653.489 - 12703.902: 98.7357% ( 3) 00:07:55.909 12703.902 - 12754.314: 98.7536% ( 3) 00:07:55.909 12754.314 - 12804.726: 98.7715% ( 3) 00:07:55.909 12804.726 - 12855.138: 98.7894% ( 3) 00:07:55.909 12855.138 - 12905.551: 98.8073% ( 3) 00:07:55.909 12905.551 - 13006.375: 98.8430% ( 6) 00:07:55.909 13006.375 - 13107.200: 98.8550% ( 2) 00:07:55.909 13611.323 - 13712.148: 98.9086% ( 9) 00:07:55.909 13712.148 - 13812.972: 99.1412% ( 39) 00:07:55.909 13812.972 - 13913.797: 99.1889% ( 8) 00:07:55.909 13913.797 - 14014.622: 99.2665% ( 13) 00:07:55.909 14014.622 - 14115.446: 99.3380% ( 12) 00:07:55.909 14115.446 - 14216.271: 99.4036% ( 11) 00:07:55.909 14216.271 - 14317.095: 99.4454% ( 7) 00:07:55.909 14317.095 - 14417.920: 99.4513% ( 1) 00:07:55.909 14417.920 - 14518.745: 99.4752% ( 4) 00:07:55.909 14518.745 - 14619.569: 99.5110% ( 6) 00:07:55.909 14619.569 - 14720.394: 99.5289% ( 3) 00:07:55.909 14720.394 - 14821.218: 99.5646% ( 6) 00:07:55.909 14821.218 - 14922.043: 99.5885% ( 4) 00:07:55.909 14922.043 - 15022.868: 99.6183% ( 5) 00:07:55.909 19862.449 - 19963.274: 99.6302% ( 2) 00:07:55.909 19963.274 - 20064.098: 99.6481% ( 3) 00:07:55.909 
20064.098 - 20164.923: 99.6541% ( 1) 00:07:55.909 20164.923 - 20265.748: 99.6839% ( 5) 00:07:55.909 20265.748 - 20366.572: 99.7316% ( 8) 00:07:55.909 20366.572 - 20467.397: 99.7853% ( 9) 00:07:55.909 20467.397 - 20568.222: 99.8211% ( 6) 00:07:55.909 20568.222 - 20669.046: 99.8569% ( 6) 00:07:55.909 20669.046 - 20769.871: 99.8867% ( 5) 00:07:55.909 20769.871 - 20870.695: 99.9225% ( 6) 00:07:55.909 20870.695 - 20971.520: 99.9523% ( 5) 00:07:55.909 20971.520 - 21072.345: 99.9821% ( 5) 00:07:55.910 21072.345 - 21173.169: 100.0000% ( 3) 00:07:55.910 00:07:55.910 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:55.910 ============================================================================== 00:07:55.910 Range in us Cumulative IO count 00:07:55.910 5242.880 - 5268.086: 0.0119% ( 2) 00:07:55.910 5268.086 - 5293.292: 0.0298% ( 3) 00:07:55.910 5293.292 - 5318.498: 0.0596% ( 5) 00:07:55.910 5318.498 - 5343.705: 0.0775% ( 3) 00:07:55.910 5343.705 - 5368.911: 0.0835% ( 1) 00:07:55.910 5368.911 - 5394.117: 0.0895% ( 1) 00:07:55.910 5394.117 - 5419.323: 0.1014% ( 2) 00:07:55.910 5444.529 - 5469.735: 0.1073% ( 1) 00:07:55.910 5469.735 - 5494.942: 0.1193% ( 2) 00:07:55.910 5494.942 - 5520.148: 0.1252% ( 1) 00:07:55.910 5520.148 - 5545.354: 0.1431% ( 3) 00:07:55.910 5545.354 - 5570.560: 0.1491% ( 1) 00:07:55.910 5570.560 - 5595.766: 0.1551% ( 1) 00:07:55.910 5620.972 - 5646.178: 0.1789% ( 4) 00:07:55.910 5646.178 - 5671.385: 0.1849% ( 1) 00:07:55.910 5671.385 - 5696.591: 0.1908% ( 1) 00:07:55.910 5696.591 - 5721.797: 0.2028% ( 2) 00:07:55.910 5721.797 - 5747.003: 0.2087% ( 1) 00:07:55.910 5747.003 - 5772.209: 0.2147% ( 1) 00:07:55.910 5772.209 - 5797.415: 0.2326% ( 3) 00:07:55.910 5822.622 - 5847.828: 0.2564% ( 4) 00:07:55.910 5847.828 - 5873.034: 0.2624% ( 1) 00:07:55.910 5873.034 - 5898.240: 0.2922% ( 5) 00:07:55.910 5898.240 - 5923.446: 0.3459% ( 9) 00:07:55.910 5923.446 - 5948.652: 0.3996% ( 9) 00:07:55.910 5948.652 - 5973.858: 0.4294% ( 5) 00:07:55.910 
5973.858 - 5999.065: 0.4592% ( 5) 00:07:55.910 5999.065 - 6024.271: 0.5308% ( 12) 00:07:55.910 6024.271 - 6049.477: 0.6262% ( 16) 00:07:55.910 6049.477 - 6074.683: 0.7276% ( 17) 00:07:55.910 6074.683 - 6099.889: 0.8349% ( 18) 00:07:55.910 6099.889 - 6125.095: 0.9602% ( 21) 00:07:55.910 6125.095 - 6150.302: 1.1152% ( 26) 00:07:55.910 6150.302 - 6175.508: 1.3240% ( 35) 00:07:55.910 6175.508 - 6200.714: 1.5923% ( 45) 00:07:55.910 6200.714 - 6225.920: 1.8667% ( 46) 00:07:55.910 6225.920 - 6251.126: 2.1827% ( 53) 00:07:55.910 6251.126 - 6276.332: 2.6002% ( 70) 00:07:55.910 6276.332 - 6301.538: 3.1548% ( 93) 00:07:55.910 6301.538 - 6326.745: 3.8705% ( 120) 00:07:55.910 6326.745 - 6351.951: 4.8366% ( 162) 00:07:55.910 6351.951 - 6377.157: 5.7908% ( 160) 00:07:55.910 6377.157 - 6402.363: 6.8404% ( 176) 00:07:55.910 6402.363 - 6427.569: 8.1286% ( 216) 00:07:55.910 6427.569 - 6452.775: 9.7746% ( 276) 00:07:55.910 6452.775 - 6503.188: 13.1322% ( 563) 00:07:55.910 6503.188 - 6553.600: 16.2154% ( 517) 00:07:55.910 6553.600 - 6604.012: 20.0024% ( 635) 00:07:55.910 6604.012 - 6654.425: 23.6582% ( 613) 00:07:55.910 6654.425 - 6704.837: 27.6837% ( 675) 00:07:55.910 6704.837 - 6755.249: 30.9936% ( 555) 00:07:55.910 6755.249 - 6805.662: 34.0052% ( 505) 00:07:55.910 6805.662 - 6856.074: 36.5339% ( 424) 00:07:55.910 6856.074 - 6906.486: 39.3189% ( 467) 00:07:55.910 6906.486 - 6956.898: 41.8833% ( 430) 00:07:55.910 6956.898 - 7007.311: 44.7161% ( 475) 00:07:55.910 7007.311 - 7057.723: 47.5728% ( 479) 00:07:55.910 7057.723 - 7108.135: 49.8986% ( 390) 00:07:55.910 7108.135 - 7158.548: 51.5864% ( 283) 00:07:55.910 7158.548 - 7208.960: 53.0952% ( 253) 00:07:55.910 7208.960 - 7259.372: 54.4549% ( 228) 00:07:55.910 7259.372 - 7309.785: 55.5701% ( 187) 00:07:55.910 7309.785 - 7360.197: 56.9239% ( 227) 00:07:55.910 7360.197 - 7410.609: 58.1167% ( 200) 00:07:55.910 7410.609 - 7461.022: 59.3571% ( 208) 00:07:55.910 7461.022 - 7511.434: 60.6453% ( 216) 00:07:55.910 7511.434 - 7561.846: 61.8201% ( 
197) 00:07:55.910 7561.846 - 7612.258: 63.0069% ( 199) 00:07:55.910 7612.258 - 7662.671: 63.9194% ( 153) 00:07:55.910 7662.671 - 7713.083: 64.9272% ( 169) 00:07:55.910 7713.083 - 7763.495: 66.1796% ( 210) 00:07:55.910 7763.495 - 7813.908: 68.2073% ( 340) 00:07:55.910 7813.908 - 7864.320: 70.2171% ( 337) 00:07:55.910 7864.320 - 7914.732: 71.9048% ( 283) 00:07:55.910 7914.732 - 7965.145: 73.1990% ( 217) 00:07:55.910 7965.145 - 8015.557: 74.1710% ( 163) 00:07:55.910 8015.557 - 8065.969: 75.2863% ( 187) 00:07:55.910 8065.969 - 8116.382: 76.2226% ( 157) 00:07:55.910 8116.382 - 8166.794: 77.0813% ( 144) 00:07:55.910 8166.794 - 8217.206: 78.0117% ( 156) 00:07:55.910 8217.206 - 8267.618: 78.9003% ( 149) 00:07:55.910 8267.618 - 8318.031: 79.8426% ( 158) 00:07:55.910 8318.031 - 8368.443: 80.7968% ( 160) 00:07:55.910 8368.443 - 8418.855: 81.7211% ( 155) 00:07:55.910 8418.855 - 8469.268: 82.7469% ( 172) 00:07:55.910 8469.268 - 8519.680: 83.4566% ( 119) 00:07:55.910 8519.680 - 8570.092: 84.1245% ( 112) 00:07:55.910 8570.092 - 8620.505: 84.8938% ( 129) 00:07:55.910 8620.505 - 8670.917: 85.8003% ( 152) 00:07:55.910 8670.917 - 8721.329: 86.5577% ( 127) 00:07:55.910 8721.329 - 8771.742: 87.2555% ( 117) 00:07:55.910 8771.742 - 8822.154: 87.8876% ( 106) 00:07:55.910 8822.154 - 8872.566: 88.4005% ( 86) 00:07:55.910 8872.566 - 8922.978: 88.8001% ( 67) 00:07:55.910 8922.978 - 8973.391: 89.0804% ( 47) 00:07:55.910 8973.391 - 9023.803: 89.3309% ( 42) 00:07:55.910 9023.803 - 9074.215: 89.6112% ( 47) 00:07:55.910 9074.215 - 9124.628: 89.7960% ( 31) 00:07:55.910 9124.628 - 9175.040: 89.9630% ( 28) 00:07:55.910 9175.040 - 9225.452: 90.1062% ( 24) 00:07:55.910 9225.452 - 9275.865: 90.2195% ( 19) 00:07:55.910 9275.865 - 9326.277: 90.3626% ( 24) 00:07:55.910 9326.277 - 9376.689: 90.5952% ( 39) 00:07:55.910 9376.689 - 9427.102: 90.7085% ( 19) 00:07:55.910 9427.102 - 9477.514: 90.8158% ( 18) 00:07:55.910 9477.514 - 9527.926: 90.9411% ( 21) 00:07:55.910 9527.926 - 9578.338: 91.0425% ( 17) 
00:07:55.910 9578.338 - 9628.751: 91.1558% ( 19) 00:07:55.910 9628.751 - 9679.163: 91.2333% ( 13) 00:07:55.910 9679.163 - 9729.575: 91.4599% ( 38) 00:07:55.910 9729.575 - 9779.988: 91.6269% ( 28) 00:07:55.910 9779.988 - 9830.400: 91.8118% ( 31) 00:07:55.910 9830.400 - 9880.812: 91.9788% ( 28) 00:07:55.910 9880.812 - 9931.225: 92.1159% ( 23) 00:07:55.910 9931.225 - 9981.637: 92.2292% ( 19) 00:07:55.910 9981.637 - 10032.049: 92.3664% ( 23) 00:07:55.910 10032.049 - 10082.462: 92.5513% ( 31) 00:07:55.910 10082.462 - 10132.874: 92.6825% ( 22) 00:07:55.910 10132.874 - 10183.286: 92.7839% ( 17) 00:07:55.910 10183.286 - 10233.698: 92.9151% ( 22) 00:07:55.910 10233.698 - 10284.111: 93.0165% ( 17) 00:07:55.910 10284.111 - 10334.523: 93.1298% ( 19) 00:07:55.910 10334.523 - 10384.935: 93.2550% ( 21) 00:07:55.910 10384.935 - 10435.348: 93.3743% ( 20) 00:07:55.910 10435.348 - 10485.760: 93.4876% ( 19) 00:07:55.910 10485.760 - 10536.172: 93.6367% ( 25) 00:07:55.910 10536.172 - 10586.585: 93.7917% ( 26) 00:07:55.910 10586.585 - 10636.997: 93.9051% ( 19) 00:07:55.910 10636.997 - 10687.409: 94.0064% ( 17) 00:07:55.910 10687.409 - 10737.822: 94.0840% ( 13) 00:07:55.910 10737.822 - 10788.234: 94.1615% ( 13) 00:07:55.910 10788.234 - 10838.646: 94.2569% ( 16) 00:07:55.910 10838.646 - 10889.058: 94.3404% ( 14) 00:07:55.910 10889.058 - 10939.471: 94.4418% ( 17) 00:07:55.910 10939.471 - 10989.883: 94.5312% ( 15) 00:07:55.910 10989.883 - 11040.295: 94.6744% ( 24) 00:07:55.910 11040.295 - 11090.708: 94.8294% ( 26) 00:07:55.910 11090.708 - 11141.120: 94.9189% ( 15) 00:07:55.910 11141.120 - 11191.532: 95.0024% ( 14) 00:07:55.910 11191.532 - 11241.945: 95.0799% ( 13) 00:07:55.910 11241.945 - 11292.357: 95.1455% ( 11) 00:07:55.910 11292.357 - 11342.769: 95.3006% ( 26) 00:07:55.910 11342.769 - 11393.182: 95.4497% ( 25) 00:07:55.910 11393.182 - 11443.594: 95.5570% ( 18) 00:07:55.910 11443.594 - 11494.006: 95.6644% ( 18) 00:07:55.910 11494.006 - 11544.418: 95.7598% ( 16) 00:07:55.910 11544.418 - 
11594.831: 95.8612% ( 17) 00:07:55.910 11594.831 - 11645.243: 95.9387% ( 13) 00:07:55.910 11645.243 - 11695.655: 96.0341% ( 16) 00:07:55.910 11695.655 - 11746.068: 96.1594% ( 21) 00:07:55.910 11746.068 - 11796.480: 96.4039% ( 41) 00:07:55.910 11796.480 - 11846.892: 96.6186% ( 36) 00:07:55.910 11846.892 - 11897.305: 96.8034% ( 31) 00:07:55.910 11897.305 - 11947.717: 97.0301% ( 38) 00:07:55.910 11947.717 - 11998.129: 97.2090% ( 30) 00:07:55.910 11998.129 - 12048.542: 97.3521% ( 24) 00:07:55.910 12048.542 - 12098.954: 97.4714% ( 20) 00:07:55.910 12098.954 - 12149.366: 97.5966% ( 21) 00:07:55.910 12149.366 - 12199.778: 97.7457% ( 25) 00:07:55.910 12199.778 - 12250.191: 97.8709% ( 21) 00:07:55.910 12250.191 - 12300.603: 98.0021% ( 22) 00:07:55.910 12300.603 - 12351.015: 98.0976% ( 16) 00:07:55.910 12351.015 - 12401.428: 98.1751% ( 13) 00:07:55.910 12401.428 - 12451.840: 98.2646% ( 15) 00:07:55.911 12451.840 - 12502.252: 98.3421% ( 13) 00:07:55.911 12502.252 - 12552.665: 98.4315% ( 15) 00:07:55.911 12552.665 - 12603.077: 98.5031% ( 12) 00:07:55.911 12603.077 - 12653.489: 98.5806% ( 13) 00:07:55.911 12653.489 - 12703.902: 98.6462% ( 11) 00:07:55.911 12703.902 - 12754.314: 98.7238% ( 13) 00:07:55.911 12754.314 - 12804.726: 98.8132% ( 15) 00:07:55.911 12804.726 - 12855.138: 98.9027% ( 15) 00:07:55.911 12855.138 - 12905.551: 98.9504% ( 8) 00:07:55.911 12905.551 - 13006.375: 99.0518% ( 17) 00:07:55.911 13006.375 - 13107.200: 99.0935% ( 7) 00:07:55.911 13107.200 - 13208.025: 99.1114% ( 3) 00:07:55.911 13208.025 - 13308.849: 99.1293% ( 3) 00:07:55.911 13308.849 - 13409.674: 99.1412% ( 2) 00:07:55.911 13409.674 - 13510.498: 99.1591% ( 3) 00:07:55.911 13611.323 - 13712.148: 99.1770% ( 3) 00:07:55.911 13712.148 - 13812.972: 99.1949% ( 3) 00:07:55.911 13812.972 - 13913.797: 99.2188% ( 4) 00:07:55.911 13913.797 - 14014.622: 99.2307% ( 2) 00:07:55.911 14014.622 - 14115.446: 99.2366% ( 1) 00:07:55.911 14317.095 - 14417.920: 99.2545% ( 3) 00:07:55.911 14417.920 - 14518.745: 99.2724% ( 
3) 00:07:55.911 14518.745 - 14619.569: 99.2844% ( 2) 00:07:55.911 14619.569 - 14720.394: 99.3082% ( 4) 00:07:55.911 14720.394 - 14821.218: 99.3201% ( 2) 00:07:55.911 14821.218 - 14922.043: 99.3321% ( 2) 00:07:55.911 14922.043 - 15022.868: 99.3500% ( 3) 00:07:55.911 15022.868 - 15123.692: 99.3559% ( 1) 00:07:55.911 15123.692 - 15224.517: 99.3738% ( 3) 00:07:55.911 15526.991 - 15627.815: 99.4752% ( 17) 00:07:55.911 15627.815 - 15728.640: 99.5468% ( 12) 00:07:55.911 15728.640 - 15829.465: 99.6183% ( 12) 00:07:55.911 18955.028 - 19055.852: 99.6302% ( 2) 00:07:55.911 19055.852 - 19156.677: 99.6422% ( 2) 00:07:55.911 19156.677 - 19257.502: 99.6541% ( 2) 00:07:55.911 19257.502 - 19358.326: 99.7376% ( 14) 00:07:55.911 19358.326 - 19459.151: 99.7555% ( 3) 00:07:55.911 19459.151 - 19559.975: 99.8092% ( 9) 00:07:55.911 19559.975 - 19660.800: 99.9225% ( 19) 00:07:55.911 19660.800 - 19761.625: 99.9702% ( 8) 00:07:55.911 19761.625 - 19862.449: 100.0000% ( 5) 00:07:55.911 00:07:55.911 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:55.911 ============================================================================== 00:07:55.911 Range in us Cumulative IO count 00:07:55.911 5016.025 - 5041.231: 0.0060% ( 1) 00:07:55.911 5142.055 - 5167.262: 0.0298% ( 4) 00:07:55.911 5167.262 - 5192.468: 0.1014% ( 12) 00:07:55.911 5192.468 - 5217.674: 0.1849% ( 14) 00:07:55.911 5217.674 - 5242.880: 0.2028% ( 3) 00:07:55.911 5242.880 - 5268.086: 0.2147% ( 2) 00:07:55.911 5268.086 - 5293.292: 0.2266% ( 2) 00:07:55.911 5293.292 - 5318.498: 0.2385% ( 2) 00:07:55.911 5318.498 - 5343.705: 0.2445% ( 1) 00:07:55.911 5343.705 - 5368.911: 0.2564% ( 2) 00:07:55.911 5368.911 - 5394.117: 0.2624% ( 1) 00:07:55.911 5394.117 - 5419.323: 0.2684% ( 1) 00:07:55.911 5419.323 - 5444.529: 0.2803% ( 2) 00:07:55.911 5444.529 - 5469.735: 0.2922% ( 2) 00:07:55.911 5469.735 - 5494.942: 0.2982% ( 1) 00:07:55.911 5494.942 - 5520.148: 0.3101% ( 2) 00:07:55.911 5520.148 - 5545.354: 0.3220% ( 2) 00:07:55.911 
5545.354 - 5570.560: 0.3280% ( 1) 00:07:55.911 5570.560 - 5595.766: 0.3399% ( 2) 00:07:55.911 5595.766 - 5620.972: 0.3459% ( 1) 00:07:55.911 5620.972 - 5646.178: 0.3578% ( 2) 00:07:55.911 5646.178 - 5671.385: 0.3698% ( 2) 00:07:55.911 5671.385 - 5696.591: 0.3757% ( 1) 00:07:55.911 5696.591 - 5721.797: 0.3817% ( 1) 00:07:55.911 5923.446 - 5948.652: 0.3876% ( 1) 00:07:55.911 6049.477 - 6074.683: 0.3936% ( 1) 00:07:55.911 6074.683 - 6099.889: 0.3996% ( 1) 00:07:55.911 6099.889 - 6125.095: 0.4055% ( 1) 00:07:55.911 6125.095 - 6150.302: 0.4234% ( 3) 00:07:55.911 6150.302 - 6175.508: 0.4771% ( 9) 00:07:55.911 6175.508 - 6200.714: 0.6083% ( 22) 00:07:55.911 6200.714 - 6225.920: 0.9721% ( 61) 00:07:55.911 6225.920 - 6251.126: 1.0854% ( 19) 00:07:55.911 6251.126 - 6276.332: 1.2345% ( 25) 00:07:55.911 6276.332 - 6301.538: 1.4194% ( 31) 00:07:55.911 6301.538 - 6326.745: 1.6639% ( 41) 00:07:55.911 6326.745 - 6351.951: 2.0813% ( 70) 00:07:55.911 6351.951 - 6377.157: 2.8149% ( 123) 00:07:55.911 6377.157 - 6402.363: 3.9241% ( 186) 00:07:55.911 6402.363 - 6427.569: 5.0513% ( 189) 00:07:55.911 6427.569 - 6452.775: 5.9399% ( 149) 00:07:55.911 6452.775 - 6503.188: 8.4864% ( 427) 00:07:55.911 6503.188 - 6553.600: 12.4404% ( 663) 00:07:55.911 6553.600 - 6604.012: 16.9907% ( 763) 00:07:55.911 6604.012 - 6654.425: 21.9764% ( 836) 00:07:55.911 6654.425 - 6704.837: 26.7533% ( 801) 00:07:55.911 6704.837 - 6755.249: 31.2142% ( 748) 00:07:55.911 6755.249 - 6805.662: 35.0668% ( 646) 00:07:55.911 6805.662 - 6856.074: 38.0248% ( 496) 00:07:55.911 6856.074 - 6906.486: 41.3228% ( 553) 00:07:55.911 6906.486 - 6956.898: 44.2271% ( 487) 00:07:55.911 6956.898 - 7007.311: 47.0957% ( 481) 00:07:55.911 7007.311 - 7057.723: 49.6004% ( 420) 00:07:55.911 7057.723 - 7108.135: 51.7832% ( 366) 00:07:55.911 7108.135 - 7158.548: 53.3397% ( 261) 00:07:55.911 7158.548 - 7208.960: 55.1765% ( 308) 00:07:55.911 7208.960 - 7259.372: 56.6078% ( 240) 00:07:55.911 7259.372 - 7309.785: 57.9556% ( 226) 00:07:55.911 
7309.785 - 7360.197: 59.0291% ( 180) 00:07:55.911 7360.197 - 7410.609: 59.9714% ( 158) 00:07:55.911 7410.609 - 7461.022: 60.8302% ( 144) 00:07:55.911 7461.022 - 7511.434: 61.9036% ( 180) 00:07:55.911 7511.434 - 7561.846: 62.7803% ( 147) 00:07:55.911 7561.846 - 7612.258: 63.4959% ( 120) 00:07:55.911 7612.258 - 7662.671: 64.2176% ( 121) 00:07:55.911 7662.671 - 7713.083: 64.9034% ( 115) 00:07:55.911 7713.083 - 7763.495: 65.6310% ( 122) 00:07:55.911 7763.495 - 7813.908: 66.5196% ( 149) 00:07:55.911 7813.908 - 7864.320: 67.5751% ( 177) 00:07:55.911 7864.320 - 7914.732: 68.9349% ( 228) 00:07:55.911 7914.732 - 7965.145: 70.3781% ( 242) 00:07:55.911 7965.145 - 8015.557: 72.3879% ( 337) 00:07:55.911 8015.557 - 8065.969: 74.0697% ( 282) 00:07:55.911 8065.969 - 8116.382: 75.6143% ( 259) 00:07:55.911 8116.382 - 8166.794: 77.5942% ( 332) 00:07:55.911 8166.794 - 8217.206: 79.5861% ( 334) 00:07:55.911 8217.206 - 8267.618: 80.8266% ( 208) 00:07:55.911 8267.618 - 8318.031: 81.9358% ( 186) 00:07:55.911 8318.031 - 8368.443: 82.8602% ( 155) 00:07:55.911 8368.443 - 8418.855: 83.6474% ( 132) 00:07:55.911 8418.855 - 8469.268: 84.5062% ( 144) 00:07:55.911 8469.268 - 8519.680: 85.1682% ( 111) 00:07:55.911 8519.680 - 8570.092: 85.7824% ( 103) 00:07:55.911 8570.092 - 8620.505: 86.5756% ( 133) 00:07:55.911 8620.505 - 8670.917: 86.9931% ( 70) 00:07:55.911 8670.917 - 8721.329: 87.3509% ( 60) 00:07:55.911 8721.329 - 8771.742: 87.5656% ( 36) 00:07:55.911 8771.742 - 8822.154: 87.7684% ( 34) 00:07:55.911 8822.154 - 8872.566: 87.9950% ( 38) 00:07:55.911 8872.566 - 8922.978: 88.2335% ( 40) 00:07:55.911 8922.978 - 8973.391: 88.5079% ( 46) 00:07:55.911 8973.391 - 9023.803: 88.8359% ( 55) 00:07:55.911 9023.803 - 9074.215: 89.0923% ( 43) 00:07:55.911 9074.215 - 9124.628: 89.2832% ( 32) 00:07:55.911 9124.628 - 9175.040: 89.3905% ( 18) 00:07:55.911 9175.040 - 9225.452: 89.4859% ( 16) 00:07:55.911 9225.452 - 9275.865: 89.6589% ( 29) 00:07:55.911 9275.865 - 9326.277: 89.8915% ( 39) 00:07:55.911 9326.277 - 
9376.689: 90.0942% ( 34) 00:07:55.911 9376.689 - 9427.102: 90.3566% ( 44) 00:07:55.911 9427.102 - 9477.514: 90.7025% ( 58) 00:07:55.911 9477.514 - 9527.926: 90.9828% ( 47) 00:07:55.911 9527.926 - 9578.338: 91.2810% ( 50) 00:07:55.911 9578.338 - 9628.751: 91.4480% ( 28) 00:07:55.911 9628.751 - 9679.163: 91.5911% ( 24) 00:07:55.911 9679.163 - 9729.575: 91.7283% ( 23) 00:07:55.911 9729.575 - 9779.988: 91.8655% ( 23) 00:07:55.911 9779.988 - 9830.400: 91.9728% ( 18) 00:07:55.911 9830.400 - 9880.812: 92.1040% ( 22) 00:07:55.911 9880.812 - 9931.225: 92.2173% ( 19) 00:07:55.911 9931.225 - 9981.637: 92.3962% ( 30) 00:07:55.911 9981.637 - 10032.049: 92.5811% ( 31) 00:07:55.911 10032.049 - 10082.462: 92.9389% ( 60) 00:07:55.911 10082.462 - 10132.874: 93.0642% ( 21) 00:07:55.911 10132.874 - 10183.286: 93.1715% ( 18) 00:07:55.911 10183.286 - 10233.698: 93.2610% ( 15) 00:07:55.911 10233.698 - 10284.111: 93.3683% ( 18) 00:07:55.911 10284.111 - 10334.523: 93.4518% ( 14) 00:07:55.911 10334.523 - 10384.935: 93.5115% ( 10) 00:07:55.911 10384.935 - 10435.348: 93.5830% ( 12) 00:07:55.911 10435.348 - 10485.760: 93.6367% ( 9) 00:07:55.911 10485.760 - 10536.172: 93.6963% ( 10) 00:07:55.911 10536.172 - 10586.585: 93.7858% ( 15) 00:07:55.911 10586.585 - 10636.997: 93.9051% ( 20) 00:07:55.911 10636.997 - 10687.409: 94.0959% ( 32) 00:07:55.911 10687.409 - 10737.822: 94.1973% ( 17) 00:07:55.911 10737.822 - 10788.234: 94.2629% ( 11) 00:07:55.911 10788.234 - 10838.646: 94.3404% ( 13) 00:07:55.911 10838.646 - 10889.058: 94.4120% ( 12) 00:07:55.911 10889.058 - 10939.471: 94.4955% ( 14) 00:07:55.911 10939.471 - 10989.883: 94.5790% ( 14) 00:07:55.911 10989.883 - 11040.295: 94.6565% ( 13) 00:07:55.911 11040.295 - 11090.708: 94.7459% ( 15) 00:07:55.911 11090.708 - 11141.120: 94.7758% ( 5) 00:07:55.911 11141.120 - 11191.532: 94.8414% ( 11) 00:07:55.911 11191.532 - 11241.945: 94.9010% ( 10) 00:07:55.911 11241.945 - 11292.357: 94.9606% ( 10) 00:07:55.911 11292.357 - 11342.769: 95.0143% ( 9) 00:07:55.911 
11342.769 - 11393.182: 95.0918% ( 13) 00:07:55.911 11393.182 - 11443.594: 95.1753% ( 14) 00:07:55.911 11443.594 - 11494.006: 95.3960% ( 37) 00:07:55.911 11494.006 - 11544.418: 95.7956% ( 67) 00:07:55.911 11544.418 - 11594.831: 95.9864% ( 32) 00:07:55.911 11594.831 - 11645.243: 96.0997% ( 19) 00:07:55.911 11645.243 - 11695.655: 96.2190% ( 20) 00:07:55.911 11695.655 - 11746.068: 96.3144% ( 16) 00:07:55.911 11746.068 - 11796.480: 96.4337% ( 20) 00:07:55.912 11796.480 - 11846.892: 96.5589% ( 21) 00:07:55.912 11846.892 - 11897.305: 96.7378% ( 30) 00:07:55.912 11897.305 - 11947.717: 96.9048% ( 28) 00:07:55.912 11947.717 - 11998.129: 97.0658% ( 27) 00:07:55.912 11998.129 - 12048.542: 97.2209% ( 26) 00:07:55.912 12048.542 - 12098.954: 97.5549% ( 56) 00:07:55.912 12098.954 - 12149.366: 97.7040% ( 25) 00:07:55.912 12149.366 - 12199.778: 97.8948% ( 32) 00:07:55.912 12199.778 - 12250.191: 98.0439% ( 25) 00:07:55.912 12250.191 - 12300.603: 98.1811% ( 23) 00:07:55.912 12300.603 - 12351.015: 98.2646% ( 14) 00:07:55.912 12351.015 - 12401.428: 98.3421% ( 13) 00:07:55.912 12401.428 - 12451.840: 98.4077% ( 11) 00:07:55.912 12451.840 - 12502.252: 98.4733% ( 11) 00:07:55.912 12502.252 - 12552.665: 98.5329% ( 10) 00:07:55.912 12552.665 - 12603.077: 98.5747% ( 7) 00:07:55.912 12603.077 - 12653.489: 98.6283% ( 9) 00:07:55.912 12653.489 - 12703.902: 98.6760% ( 8) 00:07:55.912 12703.902 - 12754.314: 98.7238% ( 8) 00:07:55.912 12754.314 - 12804.726: 98.7655% ( 7) 00:07:55.912 12804.726 - 12855.138: 98.8073% ( 7) 00:07:55.912 12855.138 - 12905.551: 98.8371% ( 5) 00:07:55.912 12905.551 - 13006.375: 98.8967% ( 10) 00:07:55.912 13006.375 - 13107.200: 98.9325% ( 6) 00:07:55.912 13107.200 - 13208.025: 98.9981% ( 11) 00:07:55.912 13208.025 - 13308.849: 99.0518% ( 9) 00:07:55.912 13308.849 - 13409.674: 99.1174% ( 11) 00:07:55.912 13409.674 - 13510.498: 99.1710% ( 9) 00:07:55.912 13510.498 - 13611.323: 99.1949% ( 4) 00:07:55.912 13611.323 - 13712.148: 99.2128% ( 3) 00:07:55.912 13712.148 - 13812.972: 
99.2307% ( 3) 00:07:55.912 13812.972 - 13913.797: 99.2366% ( 1) 00:07:55.912 14821.218 - 14922.043: 99.2605% ( 4) 00:07:55.912 15022.868 - 15123.692: 99.3917% ( 22) 00:07:55.912 15123.692 - 15224.517: 99.6124% ( 37) 00:07:55.912 15224.517 - 15325.342: 99.6183% ( 1) 00:07:55.912 18450.905 - 18551.729: 99.6243% ( 1) 00:07:55.912 18551.729 - 18652.554: 99.6780% ( 9) 00:07:55.912 18652.554 - 18753.378: 99.7734% ( 16) 00:07:55.912 18753.378 - 18854.203: 99.8688% ( 16) 00:07:55.912 18854.203 - 18955.028: 99.8986% ( 5) 00:07:55.912 18955.028 - 19055.852: 99.9404% ( 7) 00:07:55.912 19055.852 - 19156.677: 99.9702% ( 5) 00:07:55.912 19156.677 - 19257.502: 99.9821% ( 2) 00:07:55.912 19358.326 - 19459.151: 99.9881% ( 1) 00:07:55.912 19459.151 - 19559.975: 100.0000% ( 2) 00:07:55.912 00:07:55.912 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:55.912 ============================================================================== 00:07:55.912 Range in us Cumulative IO count 00:07:55.912 4436.283 - 4461.489: 0.0060% ( 1) 00:07:55.912 4461.489 - 4486.695: 0.0537% ( 8) 00:07:55.912 4486.695 - 4511.902: 0.1431% ( 15) 00:07:55.912 4511.902 - 4537.108: 0.1908% ( 8) 00:07:55.912 4537.108 - 4562.314: 0.2028% ( 2) 00:07:55.912 4562.314 - 4587.520: 0.2147% ( 2) 00:07:55.912 4587.520 - 4612.726: 0.2207% ( 1) 00:07:55.912 4612.726 - 4637.932: 0.2326% ( 2) 00:07:55.912 4637.932 - 4663.138: 0.2445% ( 2) 00:07:55.912 4663.138 - 4688.345: 0.2564% ( 2) 00:07:55.912 4688.345 - 4713.551: 0.2624% ( 1) 00:07:55.912 4713.551 - 4738.757: 0.2743% ( 2) 00:07:55.912 4738.757 - 4763.963: 0.2803% ( 1) 00:07:55.912 4763.963 - 4789.169: 0.2922% ( 2) 00:07:55.912 4789.169 - 4814.375: 0.3042% ( 2) 00:07:55.912 4814.375 - 4839.582: 0.3101% ( 1) 00:07:55.912 4839.582 - 4864.788: 0.3220% ( 2) 00:07:55.912 4864.788 - 4889.994: 0.3340% ( 2) 00:07:55.912 4889.994 - 4915.200: 0.3399% ( 1) 00:07:55.912 4915.200 - 4940.406: 0.3519% ( 2) 00:07:55.912 4940.406 - 4965.612: 0.3638% ( 2) 00:07:55.912 
4965.612 - 4990.818: 0.3757% ( 2) 00:07:55.912 4990.818 - 5016.025: 0.3817% ( 1) 00:07:55.912 5898.240 - 5923.446: 0.3876% ( 1) 00:07:55.912 5973.858 - 5999.065: 0.4055% ( 3) 00:07:55.912 5999.065 - 6024.271: 0.4234% ( 3) 00:07:55.912 6024.271 - 6049.477: 0.4592% ( 6) 00:07:55.912 6049.477 - 6074.683: 0.4950% ( 6) 00:07:55.912 6074.683 - 6099.889: 0.5487% ( 9) 00:07:55.912 6099.889 - 6125.095: 0.6143% ( 11) 00:07:55.912 6125.095 - 6150.302: 0.6739% ( 10) 00:07:55.912 6150.302 - 6175.508: 0.7932% ( 20) 00:07:55.912 6175.508 - 6200.714: 0.9125% ( 20) 00:07:55.912 6200.714 - 6225.920: 1.1808% ( 45) 00:07:55.912 6225.920 - 6251.126: 1.5327% ( 59) 00:07:55.912 6251.126 - 6276.332: 1.7354% ( 34) 00:07:55.912 6276.332 - 6301.538: 1.9859% ( 42) 00:07:55.912 6301.538 - 6326.745: 2.2901% ( 51) 00:07:55.912 6326.745 - 6351.951: 2.6539% ( 61) 00:07:55.912 6351.951 - 6377.157: 3.2323% ( 97) 00:07:55.912 6377.157 - 6402.363: 4.1806% ( 159) 00:07:55.912 6402.363 - 6427.569: 5.1050% ( 155) 00:07:55.912 6427.569 - 6452.775: 6.1248% ( 171) 00:07:55.912 6452.775 - 6503.188: 9.0589% ( 492) 00:07:55.912 6503.188 - 6553.600: 12.6431% ( 601) 00:07:55.912 6553.600 - 6604.012: 16.3287% ( 618) 00:07:55.912 6604.012 - 6654.425: 20.4795% ( 696) 00:07:55.912 6654.425 - 6704.837: 25.1372% ( 781) 00:07:55.912 6704.837 - 6755.249: 29.9201% ( 802) 00:07:55.912 6755.249 - 6805.662: 33.9218% ( 671) 00:07:55.912 6805.662 - 6856.074: 37.9831% ( 681) 00:07:55.912 6856.074 - 6906.486: 41.3824% ( 570) 00:07:55.912 6906.486 - 6956.898: 44.7102% ( 558) 00:07:55.912 6956.898 - 7007.311: 46.9585% ( 377) 00:07:55.912 7007.311 - 7057.723: 49.3857% ( 407) 00:07:55.912 7057.723 - 7108.135: 51.5744% ( 367) 00:07:55.912 7108.135 - 7158.548: 53.5961% ( 339) 00:07:55.912 7158.548 - 7208.960: 55.1050% ( 253) 00:07:55.912 7208.960 - 7259.372: 56.6019% ( 251) 00:07:55.912 7259.372 - 7309.785: 58.4268% ( 306) 00:07:55.912 7309.785 - 7360.197: 59.7507% ( 222) 00:07:55.912 7360.197 - 7410.609: 61.0568% ( 219) 00:07:55.912 
7410.609 - 7461.022: 62.1780% ( 188) 00:07:55.912 7461.022 - 7511.434: 63.1202% ( 158) 00:07:55.912 7511.434 - 7561.846: 64.0446% ( 155) 00:07:55.912 7561.846 - 7612.258: 64.7722% ( 122) 00:07:55.912 7612.258 - 7662.671: 65.6787% ( 152) 00:07:55.912 7662.671 - 7713.083: 66.3228% ( 108) 00:07:55.912 7713.083 - 7763.495: 67.0742% ( 126) 00:07:55.912 7763.495 - 7813.908: 67.7660% ( 116) 00:07:55.912 7813.908 - 7864.320: 68.6546% ( 149) 00:07:55.912 7864.320 - 7914.732: 70.0859% ( 240) 00:07:55.912 7914.732 - 7965.145: 71.4158% ( 223) 00:07:55.912 7965.145 - 8015.557: 72.7099% ( 217) 00:07:55.912 8015.557 - 8065.969: 74.0398% ( 223) 00:07:55.912 8065.969 - 8116.382: 75.4532% ( 237) 00:07:55.912 8116.382 - 8166.794: 77.3438% ( 317) 00:07:55.912 8166.794 - 8217.206: 79.3356% ( 334) 00:07:55.912 8217.206 - 8267.618: 80.8564% ( 255) 00:07:55.912 8267.618 - 8318.031: 81.7688% ( 153) 00:07:55.912 8318.031 - 8368.443: 82.7648% ( 167) 00:07:55.912 8368.443 - 8418.855: 83.5401% ( 130) 00:07:55.912 8418.855 - 8469.268: 84.2975% ( 127) 00:07:55.912 8469.268 - 8519.680: 84.8640% ( 95) 00:07:55.912 8519.680 - 8570.092: 85.4246% ( 94) 00:07:55.912 8570.092 - 8620.505: 86.0448% ( 104) 00:07:55.912 8620.505 - 8670.917: 86.6889% ( 108) 00:07:55.912 8670.917 - 8721.329: 87.1660% ( 80) 00:07:55.912 8721.329 - 8771.742: 87.4523% ( 48) 00:07:55.912 8771.742 - 8822.154: 87.6193% ( 28) 00:07:55.912 8822.154 - 8872.566: 87.7266% ( 18) 00:07:55.912 8872.566 - 8922.978: 87.8638% ( 23) 00:07:55.912 8922.978 - 8973.391: 88.0129% ( 25) 00:07:55.912 8973.391 - 9023.803: 88.1500% ( 23) 00:07:55.912 9023.803 - 9074.215: 88.3111% ( 27) 00:07:55.912 9074.215 - 9124.628: 88.5914% ( 47) 00:07:55.912 9124.628 - 9175.040: 88.8836% ( 49) 00:07:55.912 9175.040 - 9225.452: 89.1400% ( 43) 00:07:55.912 9225.452 - 9275.865: 89.3965% ( 43) 00:07:55.912 9275.865 - 9326.277: 89.5813% ( 31) 00:07:55.912 9326.277 - 9376.689: 89.7662% ( 31) 00:07:55.912 9376.689 - 9427.102: 89.9571% ( 32) 00:07:55.912 9427.102 - 
9477.514: 90.1360% ( 30) 00:07:55.912 9477.514 - 9527.926: 90.2612% ( 21) 00:07:55.912 9527.926 - 9578.338: 90.4938% ( 39) 00:07:55.912 9578.338 - 9628.751: 90.7622% ( 45) 00:07:55.912 9628.751 - 9679.163: 90.9948% ( 39) 00:07:55.912 9679.163 - 9729.575: 91.1319% ( 23) 00:07:55.912 9729.575 - 9779.988: 91.2572% ( 21) 00:07:55.912 9779.988 - 9830.400: 91.3943% ( 23) 00:07:55.912 9830.400 - 9880.812: 91.5315% ( 23) 00:07:55.912 9880.812 - 9931.225: 91.7402% ( 35) 00:07:55.912 9931.225 - 9981.637: 91.9728% ( 39) 00:07:55.912 9981.637 - 10032.049: 92.1398% ( 28) 00:07:55.912 10032.049 - 10082.462: 92.2948% ( 26) 00:07:55.912 10082.462 - 10132.874: 92.6050% ( 52) 00:07:55.912 10132.874 - 10183.286: 92.7719% ( 28) 00:07:55.912 10183.286 - 10233.698: 93.0284% ( 43) 00:07:55.912 10233.698 - 10284.111: 93.1894% ( 27) 00:07:55.912 10284.111 - 10334.523: 93.4757% ( 48) 00:07:55.912 10334.523 - 10384.935: 93.6307% ( 26) 00:07:55.912 10384.935 - 10435.348: 93.7560% ( 21) 00:07:55.912 10435.348 - 10485.760: 93.8514% ( 16) 00:07:55.912 10485.760 - 10536.172: 93.9110% ( 10) 00:07:55.912 10536.172 - 10586.585: 93.9766% ( 11) 00:07:55.912 10586.585 - 10636.997: 94.0064% ( 5) 00:07:55.912 10636.997 - 10687.409: 94.0422% ( 6) 00:07:55.912 10687.409 - 10737.822: 94.0720% ( 5) 00:07:55.912 10737.822 - 10788.234: 94.1257% ( 9) 00:07:55.912 10788.234 - 10838.646: 94.2032% ( 13) 00:07:55.912 10838.646 - 10889.058: 94.2688% ( 11) 00:07:55.912 10889.058 - 10939.471: 94.3344% ( 11) 00:07:55.912 10939.471 - 10989.883: 94.4776% ( 24) 00:07:55.912 10989.883 - 11040.295: 94.6147% ( 23) 00:07:55.912 11040.295 - 11090.708: 94.8593% ( 41) 00:07:55.912 11090.708 - 11141.120: 95.0501% ( 32) 00:07:55.912 11141.120 - 11191.532: 95.2052% ( 26) 00:07:55.912 11191.532 - 11241.945: 95.3662% ( 27) 00:07:55.912 11241.945 - 11292.357: 95.5570% ( 32) 00:07:55.912 11292.357 - 11342.769: 95.7359% ( 30) 00:07:55.913 11342.769 - 11393.182: 95.9327% ( 33) 00:07:55.913 11393.182 - 11443.594: 96.0997% ( 28) 
00:07:55.913 11443.594 - 11494.006: 96.2906% ( 32) 00:07:55.913 11494.006 - 11544.418: 96.4933% ( 34) 00:07:55.913 11544.418 - 11594.831: 96.6782% ( 31) 00:07:55.913 11594.831 - 11645.243: 96.8571% ( 30) 00:07:55.913 11645.243 - 11695.655: 96.9406% ( 14) 00:07:55.913 11695.655 - 11746.068: 97.0241% ( 14) 00:07:55.913 11746.068 - 11796.480: 97.1195% ( 16) 00:07:55.913 11796.480 - 11846.892: 97.2328% ( 19) 00:07:55.913 11846.892 - 11897.305: 97.3521% ( 20) 00:07:55.913 11897.305 - 11947.717: 97.4594% ( 18) 00:07:55.913 11947.717 - 11998.129: 97.7219% ( 44) 00:07:55.913 11998.129 - 12048.542: 97.8292% ( 18) 00:07:55.913 12048.542 - 12098.954: 97.9067% ( 13) 00:07:55.913 12098.954 - 12149.366: 97.9723% ( 11) 00:07:55.913 12149.366 - 12199.778: 98.0379% ( 11) 00:07:55.913 12199.778 - 12250.191: 98.0976% ( 10) 00:07:55.913 12250.191 - 12300.603: 98.1512% ( 9) 00:07:55.913 12300.603 - 12351.015: 98.1870% ( 6) 00:07:55.913 12351.015 - 12401.428: 98.2228% ( 6) 00:07:55.913 12401.428 - 12451.840: 98.2586% ( 6) 00:07:55.913 12451.840 - 12502.252: 98.3003% ( 7) 00:07:55.913 12502.252 - 12552.665: 98.3480% ( 8) 00:07:55.913 12552.665 - 12603.077: 98.3958% ( 8) 00:07:55.913 12603.077 - 12653.489: 98.4554% ( 10) 00:07:55.913 12653.489 - 12703.902: 98.5031% ( 8) 00:07:55.913 12703.902 - 12754.314: 98.5568% ( 9) 00:07:55.913 12754.314 - 12804.726: 98.5866% ( 5) 00:07:55.913 12804.726 - 12855.138: 98.6164% ( 5) 00:07:55.913 12855.138 - 12905.551: 98.6403% ( 4) 00:07:55.913 12905.551 - 13006.375: 98.6701% ( 5) 00:07:55.913 13006.375 - 13107.200: 98.7118% ( 7) 00:07:55.913 13107.200 - 13208.025: 98.7536% ( 7) 00:07:55.913 13208.025 - 13308.849: 98.7894% ( 6) 00:07:55.913 13308.849 - 13409.674: 98.8251% ( 6) 00:07:55.913 13409.674 - 13510.498: 98.8609% ( 6) 00:07:55.913 13510.498 - 13611.323: 98.8907% ( 5) 00:07:55.913 13611.323 - 13712.148: 98.9086% ( 3) 00:07:55.913 13712.148 - 13812.972: 98.9325% ( 4) 00:07:55.913 13812.972 - 13913.797: 98.9683% ( 6) 00:07:55.913 13913.797 - 
14014.622: 99.0637% ( 16) 00:07:55.913 14014.622 - 14115.446: 99.2009% ( 23) 00:07:55.913 14115.446 - 14216.271: 99.2366% ( 6) 00:07:55.913 14417.920 - 14518.745: 99.2605% ( 4) 00:07:55.913 14518.745 - 14619.569: 99.2724% ( 2) 00:07:55.913 14619.569 - 14720.394: 99.3022% ( 5) 00:07:55.913 14720.394 - 14821.218: 99.5706% ( 45) 00:07:55.913 14821.218 - 14922.043: 99.6064% ( 6) 00:07:55.913 14922.043 - 15022.868: 99.6183% ( 2) 00:07:55.913 18148.431 - 18249.255: 99.6302% ( 2) 00:07:55.913 18249.255 - 18350.080: 99.6958% ( 11) 00:07:55.913 18350.080 - 18450.905: 99.7853% ( 15) 00:07:55.913 18450.905 - 18551.729: 99.8867% ( 17) 00:07:55.913 18551.729 - 18652.554: 99.9344% ( 8) 00:07:55.913 18652.554 - 18753.378: 99.9583% ( 4) 00:07:55.913 18753.378 - 18854.203: 99.9821% ( 4) 00:07:55.913 18854.203 - 18955.028: 100.0000% ( 3) 00:07:55.913 00:07:55.913 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:55.913 ============================================================================== 00:07:55.913 Range in us Cumulative IO count 00:07:55.913 4108.603 - 4133.809: 0.0298% ( 5) 00:07:55.913 4133.809 - 4159.015: 0.0895% ( 10) 00:07:55.913 4159.015 - 4184.222: 0.1670% ( 13) 00:07:55.913 4184.222 - 4209.428: 0.1968% ( 5) 00:07:55.913 4209.428 - 4234.634: 0.2087% ( 2) 00:07:55.913 4234.634 - 4259.840: 0.2207% ( 2) 00:07:55.913 4259.840 - 4285.046: 0.2266% ( 1) 00:07:55.913 4285.046 - 4310.252: 0.2385% ( 2) 00:07:55.913 4310.252 - 4335.458: 0.2505% ( 2) 00:07:55.913 4335.458 - 4360.665: 0.2624% ( 2) 00:07:55.913 4360.665 - 4385.871: 0.2684% ( 1) 00:07:55.913 4385.871 - 4411.077: 0.2803% ( 2) 00:07:55.913 4411.077 - 4436.283: 0.2863% ( 1) 00:07:55.913 4436.283 - 4461.489: 0.2982% ( 2) 00:07:55.913 4461.489 - 4486.695: 0.3042% ( 1) 00:07:55.913 4486.695 - 4511.902: 0.3161% ( 2) 00:07:55.913 4511.902 - 4537.108: 0.3280% ( 2) 00:07:55.913 4537.108 - 4562.314: 0.3340% ( 1) 00:07:55.913 4562.314 - 4587.520: 0.3459% ( 2) 00:07:55.913 4587.520 - 4612.726: 0.3578% ( 2) 
00:07:55.913 4612.726 - 4637.932: 0.3638% ( 1) 00:07:55.913 4637.932 - 4663.138: 0.3757% ( 2) 00:07:55.913 4663.138 - 4688.345: 0.3817% ( 1) 00:07:55.913 5973.858 - 5999.065: 0.3876% ( 1) 00:07:55.913 5999.065 - 6024.271: 0.3996% ( 2) 00:07:55.913 6024.271 - 6049.477: 0.4055% ( 1) 00:07:55.913 6049.477 - 6074.683: 0.4473% ( 7) 00:07:55.913 6074.683 - 6099.889: 0.4831% ( 6) 00:07:55.913 6099.889 - 6125.095: 0.5129% ( 5) 00:07:55.913 6125.095 - 6150.302: 0.5785% ( 11) 00:07:55.913 6150.302 - 6175.508: 0.6739% ( 16) 00:07:55.913 6175.508 - 6200.714: 0.8767% ( 34) 00:07:55.913 6200.714 - 6225.920: 1.0079% ( 22) 00:07:55.913 6225.920 - 6251.126: 1.2226% ( 36) 00:07:55.913 6251.126 - 6276.332: 1.5029% ( 47) 00:07:55.913 6276.332 - 6301.538: 1.7891% ( 48) 00:07:55.913 6301.538 - 6326.745: 2.1947% ( 68) 00:07:55.913 6326.745 - 6351.951: 2.8328% ( 107) 00:07:55.913 6351.951 - 6377.157: 3.5067% ( 113) 00:07:55.913 6377.157 - 6402.363: 4.4072% ( 151) 00:07:55.913 6402.363 - 6427.569: 5.5761% ( 196) 00:07:55.913 6427.569 - 6452.775: 6.7510% ( 197) 00:07:55.913 6452.775 - 6503.188: 9.2617% ( 421) 00:07:55.913 6503.188 - 6553.600: 13.3051% ( 678) 00:07:55.913 6553.600 - 6604.012: 17.3247% ( 674) 00:07:55.913 6604.012 - 6654.425: 21.4337% ( 689) 00:07:55.913 6654.425 - 6704.837: 25.9482% ( 757) 00:07:55.913 6704.837 - 6755.249: 30.7848% ( 811) 00:07:55.913 6755.249 - 6805.662: 35.0549% ( 716) 00:07:55.913 6805.662 - 6856.074: 38.4363% ( 567) 00:07:55.913 6856.074 - 6906.486: 41.9490% ( 589) 00:07:55.913 6906.486 - 6956.898: 44.7400% ( 468) 00:07:55.913 6956.898 - 7007.311: 46.9704% ( 374) 00:07:55.913 7007.311 - 7057.723: 49.9940% ( 507) 00:07:55.913 7057.723 - 7108.135: 51.8488% ( 311) 00:07:55.913 7108.135 - 7158.548: 53.6617% ( 304) 00:07:55.913 7158.548 - 7208.960: 55.4866% ( 306) 00:07:55.913 7208.960 - 7259.372: 57.1565% ( 280) 00:07:55.913 7259.372 - 7309.785: 58.2359% ( 181) 00:07:55.913 7309.785 - 7360.197: 59.3333% ( 184) 00:07:55.913 7360.197 - 7410.609: 60.6691% ( 
224) 00:07:55.913 7410.609 - 7461.022: 61.6472% ( 164) 00:07:55.913 7461.022 - 7511.434: 62.8340% ( 199) 00:07:55.913 7511.434 - 7561.846: 63.8418% ( 169) 00:07:55.913 7561.846 - 7612.258: 65.0525% ( 203) 00:07:55.913 7612.258 - 7662.671: 65.7383% ( 115) 00:07:55.913 7662.671 - 7713.083: 66.1916% ( 76) 00:07:55.913 7713.083 - 7763.495: 66.8356% ( 108) 00:07:55.913 7763.495 - 7813.908: 67.4917% ( 110) 00:07:55.913 7813.908 - 7864.320: 68.3325% ( 141) 00:07:55.913 7864.320 - 7914.732: 69.3583% ( 172) 00:07:55.913 7914.732 - 7965.145: 70.6524% ( 217) 00:07:55.913 7965.145 - 8015.557: 72.3700% ( 288) 00:07:55.913 8015.557 - 8065.969: 74.2247% ( 311) 00:07:55.913 8065.969 - 8116.382: 75.9721% ( 293) 00:07:55.913 8116.382 - 8166.794: 77.5346% ( 262) 00:07:55.913 8166.794 - 8217.206: 79.2164% ( 282) 00:07:55.913 8217.206 - 8267.618: 80.6417% ( 239) 00:07:55.913 8267.618 - 8318.031: 81.7331% ( 183) 00:07:55.913 8318.031 - 8368.443: 82.8125% ( 181) 00:07:55.913 8368.443 - 8418.855: 83.5460% ( 123) 00:07:55.913 8418.855 - 8469.268: 84.4525% ( 152) 00:07:55.913 8469.268 - 8519.680: 85.1503% ( 117) 00:07:55.913 8519.680 - 8570.092: 85.7824% ( 106) 00:07:55.913 8570.092 - 8620.505: 86.4325% ( 109) 00:07:55.913 8620.505 - 8670.917: 87.1064% ( 113) 00:07:55.913 8670.917 - 8721.329: 87.5895% ( 81) 00:07:55.913 8721.329 - 8771.742: 87.9354% ( 58) 00:07:55.913 8771.742 - 8822.154: 88.1262% ( 32) 00:07:55.913 8822.154 - 8872.566: 88.2991% ( 29) 00:07:55.913 8872.566 - 8922.978: 88.5496% ( 42) 00:07:55.913 8922.978 - 8973.391: 88.7524% ( 34) 00:07:55.913 8973.391 - 9023.803: 88.8776% ( 21) 00:07:55.913 9023.803 - 9074.215: 88.9432% ( 11) 00:07:55.914 9074.215 - 9124.628: 89.0088% ( 11) 00:07:55.914 9124.628 - 9175.040: 89.0685% ( 10) 00:07:55.914 9175.040 - 9225.452: 89.1221% ( 9) 00:07:55.914 9225.452 - 9275.865: 89.1997% ( 13) 00:07:55.914 9275.865 - 9326.277: 89.2891% ( 15) 00:07:55.914 9326.277 - 9376.689: 89.3607% ( 12) 00:07:55.914 9376.689 - 9427.102: 89.4501% ( 15) 
00:07:55.914 9427.102 - 9477.514: 89.6589% ( 35) 00:07:55.914 9477.514 - 9527.926: 89.8139% ( 26) 00:07:55.914 9527.926 - 9578.338: 90.0286% ( 36) 00:07:55.914 9578.338 - 9628.751: 90.2731% ( 41) 00:07:55.914 9628.751 - 9679.163: 90.4103% ( 23) 00:07:55.914 9679.163 - 9729.575: 90.5892% ( 30) 00:07:55.914 9729.575 - 9779.988: 90.6846% ( 16) 00:07:55.914 9779.988 - 9830.400: 90.8755% ( 32) 00:07:55.914 9830.400 - 9880.812: 91.0365% ( 27) 00:07:55.914 9880.812 - 9931.225: 91.1617% ( 21) 00:07:55.914 9931.225 - 9981.637: 91.2989% ( 23) 00:07:55.914 9981.637 - 10032.049: 91.4420% ( 24) 00:07:55.914 10032.049 - 10082.462: 91.6209% ( 30) 00:07:55.914 10082.462 - 10132.874: 92.0742% ( 76) 00:07:55.914 10132.874 - 10183.286: 92.2650% ( 32) 00:07:55.914 10183.286 - 10233.698: 92.4618% ( 33) 00:07:55.914 10233.698 - 10284.111: 92.6407% ( 30) 00:07:55.914 10284.111 - 10334.523: 92.8256% ( 31) 00:07:55.914 10334.523 - 10384.935: 93.0224% ( 33) 00:07:55.914 10384.935 - 10435.348: 93.1894% ( 28) 00:07:55.914 10435.348 - 10485.760: 93.3385% ( 25) 00:07:55.914 10485.760 - 10536.172: 93.4757% ( 23) 00:07:55.914 10536.172 - 10586.585: 93.6546% ( 30) 00:07:55.914 10586.585 - 10636.997: 93.8335% ( 30) 00:07:55.914 10636.997 - 10687.409: 94.1019% ( 45) 00:07:55.914 10687.409 - 10737.822: 94.4358% ( 56) 00:07:55.914 10737.822 - 10788.234: 94.6505% ( 36) 00:07:55.914 10788.234 - 10838.646: 94.8354% ( 31) 00:07:55.914 10838.646 - 10889.058: 95.0083% ( 29) 00:07:55.914 10889.058 - 10939.471: 95.1515% ( 24) 00:07:55.914 10939.471 - 10989.883: 95.2588% ( 18) 00:07:55.914 10989.883 - 11040.295: 95.3304% ( 12) 00:07:55.914 11040.295 - 11090.708: 95.4437% ( 19) 00:07:55.914 11090.708 - 11141.120: 95.5809% ( 23) 00:07:55.914 11141.120 - 11191.532: 95.7717% ( 32) 00:07:55.914 11191.532 - 11241.945: 95.9089% ( 23) 00:07:55.914 11241.945 - 11292.357: 95.9983% ( 15) 00:07:55.914 11292.357 - 11342.769: 96.1116% ( 19) 00:07:55.914 11342.769 - 11393.182: 96.2130% ( 17) 00:07:55.914 11393.182 - 
11443.594: 96.2846% ( 12) 00:07:55.914 11443.594 - 11494.006: 96.3621% ( 13) 00:07:55.914 11494.006 - 11544.418: 96.4277% ( 11) 00:07:55.914 11544.418 - 11594.831: 96.5112% ( 14) 00:07:55.914 11594.831 - 11645.243: 96.5947% ( 14) 00:07:55.914 11645.243 - 11695.655: 96.6961% ( 17) 00:07:55.914 11695.655 - 11746.068: 96.7677% ( 12) 00:07:55.914 11746.068 - 11796.480: 96.8571% ( 15) 00:07:55.914 11796.480 - 11846.892: 96.9346% ( 13) 00:07:55.914 11846.892 - 11897.305: 97.0837% ( 25) 00:07:55.914 11897.305 - 11947.717: 97.2328% ( 25) 00:07:55.914 11947.717 - 11998.129: 97.4058% ( 29) 00:07:55.914 11998.129 - 12048.542: 97.5787% ( 29) 00:07:55.914 12048.542 - 12098.954: 97.7099% ( 22) 00:07:55.914 12098.954 - 12149.366: 97.8948% ( 31) 00:07:55.914 12149.366 - 12199.778: 97.9723% ( 13) 00:07:55.914 12199.778 - 12250.191: 98.0618% ( 15) 00:07:55.914 12250.191 - 12300.603: 98.1393% ( 13) 00:07:55.914 12300.603 - 12351.015: 98.1990% ( 10) 00:07:55.914 12351.015 - 12401.428: 98.2586% ( 10) 00:07:55.914 12401.428 - 12451.840: 98.3003% ( 7) 00:07:55.914 12451.840 - 12502.252: 98.3361% ( 6) 00:07:55.914 12502.252 - 12552.665: 98.3898% ( 9) 00:07:55.914 12552.665 - 12603.077: 98.4614% ( 12) 00:07:55.914 12603.077 - 12653.489: 98.5270% ( 11) 00:07:55.914 12653.489 - 12703.902: 98.5926% ( 11) 00:07:55.914 12703.902 - 12754.314: 98.6104% ( 3) 00:07:55.914 12754.314 - 12804.726: 98.6224% ( 2) 00:07:55.914 12804.726 - 12855.138: 98.6403% ( 3) 00:07:55.914 12855.138 - 12905.551: 98.6522% ( 2) 00:07:55.914 12905.551 - 13006.375: 98.6939% ( 7) 00:07:55.914 13006.375 - 13107.200: 98.7357% ( 7) 00:07:55.914 13107.200 - 13208.025: 98.7715% ( 6) 00:07:55.914 13208.025 - 13308.849: 98.8132% ( 7) 00:07:55.914 13308.849 - 13409.674: 98.8490% ( 6) 00:07:55.914 13409.674 - 13510.498: 98.8550% ( 1) 00:07:55.914 13712.148 - 13812.972: 98.8609% ( 1) 00:07:55.914 13812.972 - 13913.797: 98.9206% ( 10) 00:07:55.914 13913.797 - 14014.622: 99.1710% ( 42) 00:07:55.914 14014.622 - 14115.446: 99.2128% ( 7) 
00:07:55.914 14115.446 - 14216.271: 99.2366% ( 4) 00:07:55.914 14216.271 - 14317.095: 99.2724% ( 6) 00:07:55.914 14317.095 - 14417.920: 99.5229% ( 42) 00:07:55.914 14417.920 - 14518.745: 99.5766% ( 9) 00:07:55.914 14518.745 - 14619.569: 99.6124% ( 6) 00:07:55.914 14619.569 - 14720.394: 99.6183% ( 1) 00:07:55.914 17341.834 - 17442.658: 99.6302% ( 2) 00:07:55.914 17442.658 - 17543.483: 99.6660% ( 6) 00:07:55.914 17543.483 - 17644.308: 99.7018% ( 6) 00:07:55.914 17644.308 - 17745.132: 99.7436% ( 7) 00:07:55.914 17745.132 - 17845.957: 99.7793% ( 6) 00:07:55.914 17845.957 - 17946.782: 99.8330% ( 9) 00:07:55.914 17946.782 - 18047.606: 99.8628% ( 5) 00:07:55.914 18047.606 - 18148.431: 99.9046% ( 7) 00:07:55.914 18148.431 - 18249.255: 99.9404% ( 6) 00:07:55.914 18249.255 - 18350.080: 99.9761% ( 6) 00:07:55.914 18350.080 - 18450.905: 100.0000% ( 4) 00:07:55.914 00:07:55.914 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:55.914 ============================================================================== 00:07:55.914 Range in us Cumulative IO count 00:07:55.914 3579.274 - 3604.480: 0.0060% ( 1) 00:07:55.914 3730.511 - 3755.717: 0.0119% ( 1) 00:07:55.914 3755.717 - 3780.923: 0.0298% ( 3) 00:07:55.914 3780.923 - 3806.129: 0.0596% ( 5) 00:07:55.914 3806.129 - 3831.335: 0.1193% ( 10) 00:07:55.914 3831.335 - 3856.542: 0.1491% ( 5) 00:07:55.914 3856.542 - 3881.748: 0.1789% ( 5) 00:07:55.914 3881.748 - 3906.954: 0.1968% ( 3) 00:07:55.914 3906.954 - 3932.160: 0.2087% ( 2) 00:07:55.914 3932.160 - 3957.366: 0.2147% ( 1) 00:07:55.914 3957.366 - 3982.572: 0.2266% ( 2) 00:07:55.914 3982.572 - 4007.778: 0.2385% ( 2) 00:07:55.914 4007.778 - 4032.985: 0.2445% ( 1) 00:07:55.914 4032.985 - 4058.191: 0.2564% ( 2) 00:07:55.914 4058.191 - 4083.397: 0.2684% ( 2) 00:07:55.914 4083.397 - 4108.603: 0.2803% ( 2) 00:07:55.914 4108.603 - 4133.809: 0.2863% ( 1) 00:07:55.914 4133.809 - 4159.015: 0.2982% ( 2) 00:07:55.914 4159.015 - 4184.222: 0.3101% ( 2) 00:07:55.914 4184.222 - 
4209.428: 0.3220% ( 2) 00:07:55.914 4209.428 - 4234.634: 0.3280% ( 1) 00:07:55.914 4234.634 - 4259.840: 0.3399% ( 2) 00:07:55.914 4259.840 - 4285.046: 0.3519% ( 2) 00:07:55.914 4285.046 - 4310.252: 0.3578% ( 1) 00:07:55.914 4310.252 - 4335.458: 0.3698% ( 2) 00:07:55.914 4335.458 - 4360.665: 0.3817% ( 2) 00:07:55.914 5923.446 - 5948.652: 0.3876% ( 1) 00:07:55.914 5973.858 - 5999.065: 0.3936% ( 1) 00:07:55.914 5999.065 - 6024.271: 0.4175% ( 4) 00:07:55.914 6024.271 - 6049.477: 0.4294% ( 2) 00:07:55.914 6049.477 - 6074.683: 0.4652% ( 6) 00:07:55.914 6074.683 - 6099.889: 0.4831% ( 3) 00:07:55.914 6099.889 - 6125.095: 0.5188% ( 6) 00:07:55.914 6125.095 - 6150.302: 0.5487% ( 5) 00:07:55.914 6150.302 - 6175.508: 0.5904% ( 7) 00:07:55.914 6175.508 - 6200.714: 0.7216% ( 22) 00:07:55.914 6200.714 - 6225.920: 0.8946% ( 29) 00:07:55.914 6225.920 - 6251.126: 1.0914% ( 33) 00:07:55.914 6251.126 - 6276.332: 1.3657% ( 46) 00:07:55.914 6276.332 - 6301.538: 1.7176% ( 59) 00:07:55.914 6301.538 - 6326.745: 2.1947% ( 80) 00:07:55.914 6326.745 - 6351.951: 2.7552% ( 94) 00:07:55.914 6351.951 - 6377.157: 3.6617% ( 152) 00:07:55.914 6377.157 - 6402.363: 4.8306% ( 196) 00:07:55.914 6402.363 - 6427.569: 5.8683% ( 174) 00:07:55.914 6427.569 - 6452.775: 7.3115% ( 242) 00:07:55.914 6452.775 - 6503.188: 9.8163% ( 420) 00:07:55.914 6503.188 - 6553.600: 13.5138% ( 620) 00:07:55.914 6553.600 - 6604.012: 17.5274% ( 673) 00:07:55.914 6604.012 - 6654.425: 21.4277% ( 654) 00:07:55.914 6654.425 - 6704.837: 25.6500% ( 708) 00:07:55.914 6704.837 - 6755.249: 30.1348% ( 752) 00:07:55.914 6755.249 - 6805.662: 34.0947% ( 664) 00:07:55.914 6805.662 - 6856.074: 38.1560% ( 681) 00:07:55.914 6856.074 - 6906.486: 41.6209% ( 581) 00:07:55.914 6906.486 - 6956.898: 44.7340% ( 522) 00:07:55.914 6956.898 - 7007.311: 48.1453% ( 572) 00:07:55.914 7007.311 - 7057.723: 50.1014% ( 328) 00:07:55.914 7057.723 - 7108.135: 52.4213% ( 389) 00:07:55.914 7108.135 - 7158.548: 54.2581% ( 308) 00:07:55.914 7158.548 - 7208.960: 
55.6536% ( 234) 00:07:55.914 7208.960 - 7259.372: 56.6734% ( 171) 00:07:55.914 7259.372 - 7309.785: 58.0033% ( 223) 00:07:55.914 7309.785 - 7360.197: 59.0530% ( 176) 00:07:55.914 7360.197 - 7410.609: 60.3471% ( 217) 00:07:55.914 7410.609 - 7461.022: 61.3132% ( 162) 00:07:55.914 7461.022 - 7511.434: 62.0050% ( 116) 00:07:55.914 7511.434 - 7561.846: 62.7385% ( 123) 00:07:55.914 7561.846 - 7612.258: 63.3349% ( 100) 00:07:55.914 7612.258 - 7662.671: 64.2593% ( 155) 00:07:55.914 7662.671 - 7713.083: 65.0644% ( 135) 00:07:55.914 7713.083 - 7763.495: 65.5177% ( 76) 00:07:55.914 7763.495 - 7813.908: 66.4778% ( 161) 00:07:55.914 7813.908 - 7864.320: 67.5751% ( 184) 00:07:55.914 7864.320 - 7914.732: 68.9408% ( 229) 00:07:55.914 7914.732 - 7965.145: 70.6047% ( 279) 00:07:55.914 7965.145 - 8015.557: 72.3521% ( 293) 00:07:55.914 8015.557 - 8065.969: 73.9027% ( 260) 00:07:55.914 8065.969 - 8116.382: 75.6381% ( 291) 00:07:55.914 8116.382 - 8166.794: 77.3378% ( 285) 00:07:55.914 8166.794 - 8217.206: 79.0375% ( 285) 00:07:55.914 8217.206 - 8267.618: 80.6715% ( 274) 00:07:55.914 8267.618 - 8318.031: 81.6317% ( 161) 00:07:55.914 8318.031 - 8368.443: 82.6336% ( 168) 00:07:55.914 8368.443 - 8418.855: 83.7607% ( 189) 00:07:55.915 8418.855 - 8469.268: 84.6195% ( 144) 00:07:55.915 8469.268 - 8519.680: 85.6691% ( 176) 00:07:55.915 8519.680 - 8570.092: 86.5100% ( 141) 00:07:55.915 8570.092 - 8620.505: 87.2853% ( 130) 00:07:55.915 8620.505 - 8670.917: 87.9532% ( 112) 00:07:55.915 8670.917 - 8721.329: 88.3409% ( 65) 00:07:55.915 8721.329 - 8771.742: 88.5973% ( 43) 00:07:55.915 8771.742 - 8822.154: 88.9730% ( 63) 00:07:55.915 8822.154 - 8872.566: 89.0506% ( 13) 00:07:55.915 8872.566 - 8922.978: 89.1042% ( 9) 00:07:55.915 8922.978 - 8973.391: 89.1460% ( 7) 00:07:55.915 8973.391 - 9023.803: 89.1937% ( 8) 00:07:55.915 9023.803 - 9074.215: 89.2354% ( 7) 00:07:55.915 9074.215 - 9124.628: 89.2712% ( 6) 00:07:55.915 9124.628 - 9175.040: 89.3189% ( 8) 00:07:55.915 9175.040 - 9225.452: 89.3547% ( 6) 
00:07:55.915 9225.452 - 9275.865: 89.3965% ( 7) 00:07:55.915 9275.865 - 9326.277: 89.4501% ( 9) 00:07:55.915 9326.277 - 9376.689: 89.4859% ( 6) 00:07:55.915 9376.689 - 9427.102: 89.5515% ( 11) 00:07:55.915 9427.102 - 9477.514: 89.6291% ( 13) 00:07:55.915 9477.514 - 9527.926: 89.7304% ( 17) 00:07:55.915 9527.926 - 9578.338: 89.9392% ( 35) 00:07:55.915 9578.338 - 9628.751: 90.2135% ( 46) 00:07:55.915 9628.751 - 9679.163: 90.5355% ( 54) 00:07:55.915 9679.163 - 9729.575: 90.7860% ( 42) 00:07:55.915 9729.575 - 9779.988: 90.9411% ( 26) 00:07:55.915 9779.988 - 9830.400: 91.1021% ( 27) 00:07:55.915 9830.400 - 9880.812: 91.2512% ( 25) 00:07:55.915 9880.812 - 9931.225: 91.4361% ( 31) 00:07:55.915 9931.225 - 9981.637: 91.6150% ( 30) 00:07:55.915 9981.637 - 10032.049: 91.7939% ( 30) 00:07:55.915 10032.049 - 10082.462: 92.3247% ( 89) 00:07:55.915 10082.462 - 10132.874: 92.5513% ( 38) 00:07:55.915 10132.874 - 10183.286: 92.7481% ( 33) 00:07:55.915 10183.286 - 10233.698: 92.9210% ( 29) 00:07:55.915 10233.698 - 10284.111: 93.0821% ( 27) 00:07:55.915 10284.111 - 10334.523: 93.2312% ( 25) 00:07:55.915 10334.523 - 10384.935: 93.3445% ( 19) 00:07:55.915 10384.935 - 10435.348: 93.4220% ( 13) 00:07:55.915 10435.348 - 10485.760: 93.4637% ( 7) 00:07:55.915 10485.760 - 10536.172: 93.5174% ( 9) 00:07:55.915 10536.172 - 10586.585: 93.6248% ( 18) 00:07:55.915 10586.585 - 10636.997: 93.7381% ( 19) 00:07:55.915 10636.997 - 10687.409: 93.8633% ( 21) 00:07:55.915 10687.409 - 10737.822: 94.0482% ( 31) 00:07:55.915 10737.822 - 10788.234: 94.2331% ( 31) 00:07:55.915 10788.234 - 10838.646: 94.3583% ( 21) 00:07:55.915 10838.646 - 10889.058: 94.5551% ( 33) 00:07:55.915 10889.058 - 10939.471: 94.6744% ( 20) 00:07:55.915 10939.471 - 10989.883: 94.7937% ( 20) 00:07:55.915 10989.883 - 11040.295: 94.9129% ( 20) 00:07:55.915 11040.295 - 11090.708: 95.0441% ( 22) 00:07:55.915 11090.708 - 11141.120: 95.3364% ( 49) 00:07:55.915 11141.120 - 11191.532: 95.4616% ( 21) 00:07:55.915 11191.532 - 11241.945: 95.6405% ( 
30) 00:07:55.915 11241.945 - 11292.357: 95.8552% ( 36) 00:07:55.915 11292.357 - 11342.769: 95.9864% ( 22) 00:07:55.915 11342.769 - 11393.182: 96.1295% ( 24) 00:07:55.915 11393.182 - 11443.594: 96.2369% ( 18) 00:07:55.915 11443.594 - 11494.006: 96.3204% ( 14) 00:07:55.915 11494.006 - 11544.418: 96.4337% ( 19) 00:07:55.915 11544.418 - 11594.831: 96.5768% ( 24) 00:07:55.915 11594.831 - 11645.243: 96.7378% ( 27) 00:07:55.915 11645.243 - 11695.655: 96.8452% ( 18) 00:07:55.915 11695.655 - 11746.068: 96.9645% ( 20) 00:07:55.915 11746.068 - 11796.480: 97.1016% ( 23) 00:07:55.915 11796.480 - 11846.892: 97.2209% ( 20) 00:07:55.915 11846.892 - 11897.305: 97.3223% ( 17) 00:07:55.915 11897.305 - 11947.717: 97.4058% ( 14) 00:07:55.915 11947.717 - 11998.129: 97.4952% ( 15) 00:07:55.915 11998.129 - 12048.542: 97.5787% ( 14) 00:07:55.915 12048.542 - 12098.954: 97.6741% ( 16) 00:07:55.915 12098.954 - 12149.366: 97.8053% ( 22) 00:07:55.915 12149.366 - 12199.778: 97.9246% ( 20) 00:07:55.915 12199.778 - 12250.191: 98.1095% ( 31) 00:07:55.915 12250.191 - 12300.603: 98.2467% ( 23) 00:07:55.915 12300.603 - 12351.015: 98.3182% ( 12) 00:07:55.915 12351.015 - 12401.428: 98.3898% ( 12) 00:07:55.915 12401.428 - 12451.840: 98.4554% ( 11) 00:07:55.915 12451.840 - 12502.252: 98.5091% ( 9) 00:07:55.915 12502.252 - 12552.665: 98.5389% ( 5) 00:07:55.915 12552.665 - 12603.077: 98.5568% ( 3) 00:07:55.915 12603.077 - 12653.489: 98.5866% ( 5) 00:07:55.915 12653.489 - 12703.902: 98.6164% ( 5) 00:07:55.915 12703.902 - 12754.314: 98.6462% ( 5) 00:07:55.915 12754.314 - 12804.726: 98.7357% ( 15) 00:07:55.915 12804.726 - 12855.138: 98.7595% ( 4) 00:07:55.915 12855.138 - 12905.551: 98.7834% ( 4) 00:07:55.915 12905.551 - 13006.375: 98.8251% ( 7) 00:07:55.915 13006.375 - 13107.200: 98.8550% ( 5) 00:07:55.915 13409.674 - 13510.498: 98.8729% ( 3) 00:07:55.915 13510.498 - 13611.323: 98.9206% ( 8) 00:07:55.915 13611.323 - 13712.148: 98.9862% ( 11) 00:07:55.915 13712.148 - 13812.972: 99.0935% ( 18) 00:07:55.915 
13812.972 - 13913.797: 99.3857% ( 49) 00:07:55.915 13913.797 - 14014.622: 99.4573% ( 12) 00:07:55.915 14014.622 - 14115.446: 99.5229% ( 11) 00:07:55.915 14115.446 - 14216.271: 99.5468% ( 4) 00:07:55.915 14216.271 - 14317.095: 99.5766% ( 5) 00:07:55.915 14317.095 - 14417.920: 99.6004% ( 4) 00:07:55.915 14417.920 - 14518.745: 99.6183% ( 3) 00:07:55.915 17241.009 - 17341.834: 99.6601% ( 7) 00:07:55.915 17341.834 - 17442.658: 99.9344% ( 46) 00:07:55.915 17745.132 - 17845.957: 99.9404% ( 1) 00:07:55.915 17845.957 - 17946.782: 99.9761% ( 6) 00:07:55.915 17946.782 - 18047.606: 100.0000% ( 4) 00:07:55.915 00:07:55.915 ************************************ 00:07:55.915 END TEST nvme_perf 00:07:55.915 ************************************ 00:07:55.915 01:07:29 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:55.915 00:07:55.915 real 0m2.408s 00:07:55.915 user 0m2.196s 00:07:55.915 sys 0m0.126s 00:07:55.915 01:07:29 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.915 01:07:29 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.915 01:07:29 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.915 01:07:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.915 01:07:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.915 01:07:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.174 ************************************ 00:07:56.174 START TEST nvme_hello_world 00:07:56.174 ************************************ 00:07:56.174 01:07:29 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:56.174 Initializing NVMe Controllers 00:07:56.174 Attached to 0000:00:13.0 00:07:56.174 Namespace ID: 1 size: 1GB 00:07:56.174 Attached to 0000:00:10.0 00:07:56.174 Namespace ID: 1 size: 6GB 00:07:56.174 Attached to 0000:00:11.0 00:07:56.174 Namespace 
ID: 1 size: 5GB 00:07:56.174 Attached to 0000:00:12.0 00:07:56.174 Namespace ID: 1 size: 4GB 00:07:56.174 Namespace ID: 2 size: 4GB 00:07:56.174 Namespace ID: 3 size: 4GB 00:07:56.174 Initialization complete. 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 INFO: using host memory buffer for IO 00:07:56.174 Hello world! 00:07:56.174 ************************************ 00:07:56.174 END TEST nvme_hello_world 00:07:56.174 ************************************ 00:07:56.174 00:07:56.174 real 0m0.191s 00:07:56.174 user 0m0.070s 00:07:56.174 sys 0m0.075s 00:07:56.174 01:07:29 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.174 01:07:29 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:56.174 01:07:29 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:56.174 01:07:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.174 01:07:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.174 01:07:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.174 ************************************ 00:07:56.174 START TEST nvme_sgl 00:07:56.174 ************************************ 00:07:56.174 01:07:29 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:56.433 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_3 Invalid IO length parameter 
00:07:56.433 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:56.433 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:56.433 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:56.433 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_6 Invalid IO length 
parameter 00:07:56.433 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:56.433 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:56.433 NVMe Readv/Writev Request test 00:07:56.433 Attached to 0000:00:13.0 00:07:56.433 Attached to 0000:00:10.0 00:07:56.433 Attached to 0000:00:11.0 00:07:56.433 Attached to 0000:00:12.0 00:07:56.433 0000:00:10.0: build_io_request_2 test passed 00:07:56.433 0000:00:10.0: build_io_request_4 test passed 00:07:56.433 0000:00:10.0: build_io_request_5 test passed 00:07:56.433 0000:00:10.0: build_io_request_6 test passed 00:07:56.433 0000:00:10.0: build_io_request_7 test passed 00:07:56.433 0000:00:10.0: build_io_request_10 test passed 00:07:56.433 0000:00:11.0: build_io_request_2 test passed 00:07:56.433 0000:00:11.0: build_io_request_4 test passed 00:07:56.433 0000:00:11.0: build_io_request_5 test passed 00:07:56.433 0000:00:11.0: build_io_request_6 test passed 00:07:56.433 0000:00:11.0: build_io_request_7 test passed 00:07:56.433 0000:00:11.0: build_io_request_10 test passed 00:07:56.433 Cleaning up... 
00:07:56.433 ************************************ 00:07:56.433 END TEST nvme_sgl 00:07:56.433 ************************************ 00:07:56.433 00:07:56.433 real 0m0.224s 00:07:56.433 user 0m0.123s 00:07:56.433 sys 0m0.067s 00:07:56.433 01:07:29 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.433 01:07:29 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:56.433 01:07:30 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:56.433 01:07:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.433 01:07:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.433 01:07:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.433 ************************************ 00:07:56.433 START TEST nvme_e2edp 00:07:56.433 ************************************ 00:07:56.433 01:07:30 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:56.691 NVMe Write/Read with End-to-End data protection test 00:07:56.691 Attached to 0000:00:13.0 00:07:56.691 Attached to 0000:00:10.0 00:07:56.691 Attached to 0000:00:11.0 00:07:56.691 Attached to 0000:00:12.0 00:07:56.691 Cleaning up... 
00:07:56.691 ************************************ 00:07:56.691 END TEST nvme_e2edp 00:07:56.691 ************************************ 00:07:56.691 00:07:56.691 real 0m0.165s 00:07:56.691 user 0m0.068s 00:07:56.691 sys 0m0.063s 00:07:56.691 01:07:30 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.691 01:07:30 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:56.691 01:07:30 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.691 01:07:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.691 01:07:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.691 01:07:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.691 ************************************ 00:07:56.691 START TEST nvme_reserve 00:07:56.691 ************************************ 00:07:56.691 01:07:30 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.950 ===================================================== 00:07:56.950 NVMe Controller at PCI bus 0, device 19, function 0 00:07:56.950 ===================================================== 00:07:56.950 Reservations: Not Supported 00:07:56.950 ===================================================== 00:07:56.950 NVMe Controller at PCI bus 0, device 16, function 0 00:07:56.950 ===================================================== 00:07:56.950 Reservations: Not Supported 00:07:56.950 ===================================================== 00:07:56.950 NVMe Controller at PCI bus 0, device 17, function 0 00:07:56.950 ===================================================== 00:07:56.950 Reservations: Not Supported 00:07:56.950 ===================================================== 00:07:56.950 NVMe Controller at PCI bus 0, device 18, function 0 00:07:56.950 ===================================================== 00:07:56.950 Reservations: Not Supported 
00:07:56.950 Reservation test passed 00:07:56.950 ************************************ 00:07:56.950 END TEST nvme_reserve 00:07:56.950 ************************************ 00:07:56.950 00:07:56.950 real 0m0.167s 00:07:56.950 user 0m0.057s 00:07:56.950 sys 0m0.075s 00:07:56.950 01:07:30 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.950 01:07:30 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:56.950 01:07:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.950 01:07:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.950 01:07:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.950 01:07:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.950 ************************************ 00:07:56.950 START TEST nvme_err_injection 00:07:56.950 ************************************ 00:07:56.950 01:07:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:57.209 NVMe Error Injection test 00:07:57.209 Attached to 0000:00:13.0 00:07:57.209 Attached to 0000:00:10.0 00:07:57.209 Attached to 0000:00:11.0 00:07:57.209 Attached to 0000:00:12.0 00:07:57.209 0000:00:12.0: get features failed as expected 00:07:57.209 0000:00:13.0: get features failed as expected 00:07:57.209 0000:00:10.0: get features failed as expected 00:07:57.209 0000:00:11.0: get features failed as expected 00:07:57.209 0000:00:13.0: get features successfully as expected 00:07:57.209 0000:00:10.0: get features successfully as expected 00:07:57.209 0000:00:11.0: get features successfully as expected 00:07:57.209 0000:00:12.0: get features successfully as expected 00:07:57.209 0000:00:12.0: read failed as expected 00:07:57.209 0000:00:13.0: read failed as expected 00:07:57.209 0000:00:10.0: read failed as expected 00:07:57.209 0000:00:11.0: read failed as 
expected 00:07:57.209 0000:00:13.0: read successfully as expected 00:07:57.209 0000:00:10.0: read successfully as expected 00:07:57.209 0000:00:11.0: read successfully as expected 00:07:57.209 0000:00:12.0: read successfully as expected 00:07:57.209 Cleaning up... 00:07:57.209 ************************************ 00:07:57.209 END TEST nvme_err_injection 00:07:57.209 ************************************ 00:07:57.209 00:07:57.209 real 0m0.192s 00:07:57.209 user 0m0.063s 00:07:57.209 sys 0m0.083s 00:07:57.209 01:07:30 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.209 01:07:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:57.209 01:07:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:57.209 01:07:30 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:57.209 01:07:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.209 01:07:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.209 ************************************ 00:07:57.209 START TEST nvme_overhead 00:07:57.209 ************************************ 00:07:57.209 01:07:30 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:58.584 Initializing NVMe Controllers 00:07:58.584 Attached to 0000:00:13.0 00:07:58.584 Attached to 0000:00:10.0 00:07:58.584 Attached to 0000:00:11.0 00:07:58.584 Attached to 0000:00:12.0 00:07:58.584 Initialization complete. Launching workers. 
00:07:58.584 submit (in ns) avg, min, max = 11312.2, 9865.4, 320556.2 00:07:58.584 complete (in ns) avg, min, max = 7643.6, 7273.1, 64273.8 00:07:58.584 00:07:58.584 Submit histogram 00:07:58.584 ================ 00:07:58.584 Range in us Cumulative Count 00:07:58.584 9.846 - 9.895: 0.0055% ( 1) 00:07:58.584 10.240 - 10.289: 0.0109% ( 1) 00:07:58.584 10.486 - 10.535: 0.0164% ( 1) 00:07:58.584 10.732 - 10.782: 0.0875% ( 13) 00:07:58.584 10.782 - 10.831: 0.6235% ( 98) 00:07:58.584 10.831 - 10.880: 2.6637% ( 373) 00:07:58.584 10.880 - 10.929: 8.3630% ( 1042) 00:07:58.584 10.929 - 10.978: 18.1261% ( 1785) 00:07:58.584 10.978 - 11.028: 31.9532% ( 2528) 00:07:58.584 11.028 - 11.077: 47.1968% ( 2787) 00:07:58.584 11.077 - 11.126: 61.3630% ( 2590) 00:07:58.584 11.126 - 11.175: 72.0888% ( 1961) 00:07:58.584 11.175 - 11.225: 79.4891% ( 1353) 00:07:58.584 11.225 - 11.274: 83.8976% ( 806) 00:07:58.584 11.274 - 11.323: 86.5941% ( 493) 00:07:58.584 11.323 - 11.372: 88.4100% ( 332) 00:07:58.584 11.372 - 11.422: 89.7063% ( 237) 00:07:58.584 11.422 - 11.471: 90.5924% ( 162) 00:07:58.584 11.471 - 11.520: 91.3362% ( 136) 00:07:58.584 11.520 - 11.569: 91.9871% ( 119) 00:07:58.584 11.569 - 11.618: 92.5505% ( 103) 00:07:58.584 11.618 - 11.668: 93.0591% ( 93) 00:07:58.584 11.668 - 11.717: 93.5678% ( 93) 00:07:58.584 11.717 - 11.766: 93.9178% ( 64) 00:07:58.584 11.766 - 11.815: 94.3773% ( 84) 00:07:58.584 11.815 - 11.865: 94.7109% ( 61) 00:07:58.585 11.865 - 11.914: 95.0883% ( 69) 00:07:58.585 11.914 - 11.963: 95.3837% ( 54) 00:07:58.585 11.963 - 12.012: 95.6134% ( 42) 00:07:58.585 12.012 - 12.062: 95.8267% ( 39) 00:07:58.585 12.062 - 12.111: 95.9963% ( 31) 00:07:58.585 12.111 - 12.160: 96.1494% ( 28) 00:07:58.585 12.160 - 12.209: 96.2916% ( 26) 00:07:58.585 12.209 - 12.258: 96.3901% ( 18) 00:07:58.585 12.258 - 12.308: 96.4885% ( 18) 00:07:58.585 12.308 - 12.357: 96.5651% ( 14) 00:07:58.585 12.357 - 12.406: 96.5979% ( 6) 00:07:58.585 12.406 - 12.455: 96.6307% ( 6) 00:07:58.585 12.455 - 
12.505: 96.6581% ( 5) 00:07:58.585 12.505 - 12.554: 96.6636% ( 1) 00:07:58.585 12.554 - 12.603: 96.6745% ( 2) 00:07:58.585 12.603 - 12.702: 96.6800% ( 1) 00:07:58.585 12.702 - 12.800: 96.6909% ( 2) 00:07:58.585 12.800 - 12.898: 96.7292% ( 7) 00:07:58.585 12.898 - 12.997: 96.8167% ( 16) 00:07:58.585 12.997 - 13.095: 96.9480% ( 24) 00:07:58.585 13.095 - 13.194: 97.0902% ( 26) 00:07:58.585 13.194 - 13.292: 97.1996% ( 20) 00:07:58.585 13.292 - 13.391: 97.3527% ( 28) 00:07:58.585 13.391 - 13.489: 97.5168% ( 30) 00:07:58.585 13.489 - 13.588: 97.6536% ( 25) 00:07:58.585 13.588 - 13.686: 97.7411% ( 16) 00:07:58.585 13.686 - 13.785: 97.8122% ( 13) 00:07:58.585 13.785 - 13.883: 97.8505% ( 7) 00:07:58.585 13.883 - 13.982: 97.9161% ( 12) 00:07:58.585 13.982 - 14.080: 97.9216% ( 1) 00:07:58.585 14.080 - 14.178: 97.9325% ( 2) 00:07:58.585 14.178 - 14.277: 97.9489% ( 3) 00:07:58.585 14.277 - 14.375: 97.9544% ( 1) 00:07:58.585 14.375 - 14.474: 97.9599% ( 1) 00:07:58.585 14.474 - 14.572: 97.9763% ( 3) 00:07:58.585 14.572 - 14.671: 98.0145% ( 7) 00:07:58.585 14.671 - 14.769: 98.0528% ( 7) 00:07:58.585 14.769 - 14.868: 98.0966% ( 8) 00:07:58.585 14.868 - 14.966: 98.1403% ( 8) 00:07:58.585 14.966 - 15.065: 98.1732% ( 6) 00:07:58.585 15.065 - 15.163: 98.2005% ( 5) 00:07:58.585 15.163 - 15.262: 98.2333% ( 6) 00:07:58.585 15.262 - 15.360: 98.2880% ( 10) 00:07:58.585 15.360 - 15.458: 98.3318% ( 8) 00:07:58.585 15.458 - 15.557: 98.3591% ( 5) 00:07:58.585 15.557 - 15.655: 98.4029% ( 8) 00:07:58.585 15.655 - 15.754: 98.4138% ( 2) 00:07:58.585 15.754 - 15.852: 98.4248% ( 2) 00:07:58.585 15.852 - 15.951: 98.4412% ( 3) 00:07:58.585 15.951 - 16.049: 98.4521% ( 2) 00:07:58.585 16.049 - 16.148: 98.4576% ( 1) 00:07:58.585 16.148 - 16.246: 98.4740% ( 3) 00:07:58.585 16.246 - 16.345: 98.4849% ( 2) 00:07:58.585 16.345 - 16.443: 98.4959% ( 2) 00:07:58.585 16.443 - 16.542: 98.5123% ( 3) 00:07:58.585 16.542 - 16.640: 98.5506% ( 7) 00:07:58.585 16.640 - 16.738: 98.6217% ( 13) 00:07:58.585 16.738 - 16.837: 
98.6709% ( 9) 00:07:58.585 16.837 - 16.935: 98.7748% ( 19) 00:07:58.585 16.935 - 17.034: 98.9225% ( 27) 00:07:58.585 17.034 - 17.132: 98.9991% ( 14) 00:07:58.585 17.132 - 17.231: 99.0811% ( 15) 00:07:58.585 17.231 - 17.329: 99.1905% ( 20) 00:07:58.585 17.329 - 17.428: 99.2780% ( 16) 00:07:58.585 17.428 - 17.526: 99.3819% ( 19) 00:07:58.585 17.526 - 17.625: 99.4476% ( 12) 00:07:58.585 17.625 - 17.723: 99.4804% ( 6) 00:07:58.585 17.723 - 17.822: 99.5296% ( 9) 00:07:58.585 17.822 - 17.920: 99.5515% ( 4) 00:07:58.585 17.920 - 18.018: 99.5788% ( 5) 00:07:58.585 18.018 - 18.117: 99.6226% ( 8) 00:07:58.585 18.117 - 18.215: 99.6718% ( 9) 00:07:58.585 18.215 - 18.314: 99.6882% ( 3) 00:07:58.585 18.314 - 18.412: 99.7211% ( 6) 00:07:58.585 18.412 - 18.511: 99.7265% ( 1) 00:07:58.585 18.511 - 18.609: 99.7484% ( 4) 00:07:58.585 18.609 - 18.708: 99.7593% ( 2) 00:07:58.585 18.708 - 18.806: 99.7703% ( 2) 00:07:58.585 18.806 - 18.905: 99.7812% ( 2) 00:07:58.585 18.905 - 19.003: 99.7867% ( 1) 00:07:58.585 19.200 - 19.298: 99.7922% ( 1) 00:07:58.585 19.495 - 19.594: 99.8031% ( 2) 00:07:58.585 19.692 - 19.791: 99.8140% ( 2) 00:07:58.585 19.791 - 19.889: 99.8195% ( 1) 00:07:58.585 20.677 - 20.775: 99.8414% ( 4) 00:07:58.585 20.775 - 20.874: 99.8523% ( 2) 00:07:58.585 20.874 - 20.972: 99.8578% ( 1) 00:07:58.585 21.366 - 21.465: 99.8633% ( 1) 00:07:58.585 21.465 - 21.563: 99.8687% ( 1) 00:07:58.585 21.563 - 21.662: 99.8797% ( 2) 00:07:58.585 21.662 - 21.760: 99.8851% ( 1) 00:07:58.585 21.957 - 22.055: 99.8906% ( 1) 00:07:58.585 22.154 - 22.252: 99.9015% ( 2) 00:07:58.585 22.252 - 22.351: 99.9125% ( 2) 00:07:58.585 22.449 - 22.548: 99.9180% ( 1) 00:07:58.585 22.548 - 22.646: 99.9234% ( 1) 00:07:58.585 22.646 - 22.745: 99.9289% ( 1) 00:07:58.585 22.745 - 22.843: 99.9344% ( 1) 00:07:58.585 23.040 - 23.138: 99.9398% ( 1) 00:07:58.585 23.335 - 23.434: 99.9453% ( 1) 00:07:58.585 23.532 - 23.631: 99.9508% ( 1) 00:07:58.585 23.729 - 23.828: 99.9562% ( 1) 00:07:58.585 24.517 - 24.615: 99.9617% ( 
1) 00:07:58.585 24.812 - 24.911: 99.9672% ( 1) 00:07:58.585 24.911 - 25.009: 99.9727% ( 1) 00:07:58.585 25.108 - 25.206: 99.9781% ( 1) 00:07:58.585 28.751 - 28.948: 99.9836% ( 1) 00:07:58.585 32.886 - 33.083: 99.9891% ( 1) 00:07:58.585 293.022 - 294.597: 99.9945% ( 1) 00:07:58.585 319.803 - 321.378: 100.0000% ( 1) 00:07:58.585 00:07:58.585 Complete histogram 00:07:58.585 ================== 00:07:58.585 Range in us Cumulative Count 00:07:58.585 7.237 - 7.286: 0.0109% ( 2) 00:07:58.585 7.286 - 7.335: 0.1969% ( 34) 00:07:58.585 7.335 - 7.385: 1.9198% ( 315) 00:07:58.585 7.385 - 7.434: 11.0977% ( 1678) 00:07:58.585 7.434 - 7.483: 33.4682% ( 4090) 00:07:58.585 7.483 - 7.532: 59.0713% ( 4681) 00:07:58.585 7.532 - 7.582: 77.9905% ( 3459) 00:07:58.585 7.582 - 7.631: 87.9068% ( 1813) 00:07:58.585 7.631 - 7.680: 92.5778% ( 854) 00:07:58.585 7.680 - 7.729: 94.6781% ( 384) 00:07:58.585 7.729 - 7.778: 95.6736% ( 182) 00:07:58.585 7.778 - 7.828: 96.0674% ( 72) 00:07:58.585 7.828 - 7.877: 96.3299% ( 48) 00:07:58.585 7.877 - 7.926: 96.4557% ( 23) 00:07:58.585 7.926 - 7.975: 96.5323% ( 14) 00:07:58.585 7.975 - 8.025: 96.5870% ( 10) 00:07:58.585 8.025 - 8.074: 96.6253% ( 7) 00:07:58.585 8.074 - 8.123: 96.7401% ( 21) 00:07:58.585 8.123 - 8.172: 96.9152% ( 32) 00:07:58.585 8.172 - 8.222: 97.0628% ( 27) 00:07:58.585 8.222 - 8.271: 97.2762% ( 39) 00:07:58.585 8.271 - 8.320: 97.4949% ( 40) 00:07:58.585 8.320 - 8.369: 97.7192% ( 41) 00:07:58.585 8.369 - 8.418: 97.9216% ( 37) 00:07:58.585 8.418 - 8.468: 98.0692% ( 27) 00:07:58.585 8.468 - 8.517: 98.1568% ( 16) 00:07:58.585 8.517 - 8.566: 98.2115% ( 10) 00:07:58.585 8.566 - 8.615: 98.2388% ( 5) 00:07:58.585 8.615 - 8.665: 98.2607% ( 4) 00:07:58.585 8.665 - 8.714: 98.2771% ( 3) 00:07:58.585 8.714 - 8.763: 98.2880% ( 2) 00:07:58.585 8.763 - 8.812: 98.2935% ( 1) 00:07:58.585 8.911 - 8.960: 98.3099% ( 3) 00:07:58.585 9.108 - 9.157: 98.3154% ( 1) 00:07:58.585 9.157 - 9.206: 98.3208% ( 1) 00:07:58.585 9.255 - 9.305: 98.3263% ( 1) 00:07:58.585 
9.305 - 9.354: 98.3318% ( 1) 00:07:58.585 9.354 - 9.403: 98.3373% ( 1) 00:07:58.585 9.403 - 9.452: 98.3427% ( 1) 00:07:58.585 9.452 - 9.502: 98.3482% ( 1) 00:07:58.585 9.748 - 9.797: 98.3591% ( 2) 00:07:58.585 9.797 - 9.846: 98.3646% ( 1) 00:07:58.585 9.846 - 9.895: 98.3810% ( 3) 00:07:58.585 9.945 - 9.994: 98.4029% ( 4) 00:07:58.585 9.994 - 10.043: 98.4193% ( 3) 00:07:58.585 10.043 - 10.092: 98.4302% ( 2) 00:07:58.585 10.092 - 10.142: 98.4466% ( 3) 00:07:58.585 10.142 - 10.191: 98.4685% ( 4) 00:07:58.585 10.191 - 10.240: 98.4740% ( 1) 00:07:58.585 10.240 - 10.289: 98.4959% ( 4) 00:07:58.585 10.289 - 10.338: 98.5177% ( 4) 00:07:58.585 10.338 - 10.388: 98.5342% ( 3) 00:07:58.585 10.388 - 10.437: 98.5451% ( 2) 00:07:58.585 10.486 - 10.535: 98.5506% ( 1) 00:07:58.585 10.535 - 10.585: 98.5670% ( 3) 00:07:58.585 10.585 - 10.634: 98.5724% ( 1) 00:07:58.585 10.634 - 10.683: 98.5889% ( 3) 00:07:58.585 10.732 - 10.782: 98.5998% ( 2) 00:07:58.585 10.831 - 10.880: 98.6107% ( 2) 00:07:58.585 10.880 - 10.929: 98.6217% ( 2) 00:07:58.585 10.929 - 10.978: 98.6271% ( 1) 00:07:58.585 10.978 - 11.028: 98.6326% ( 1) 00:07:58.585 11.175 - 11.225: 98.6381% ( 1) 00:07:58.585 11.225 - 11.274: 98.6435% ( 1) 00:07:58.585 11.323 - 11.372: 98.6490% ( 1) 00:07:58.585 11.471 - 11.520: 98.6545% ( 1) 00:07:58.585 11.717 - 11.766: 98.6600% ( 1) 00:07:58.585 12.062 - 12.111: 98.6654% ( 1) 00:07:58.585 12.505 - 12.554: 98.6709% ( 1) 00:07:58.585 12.554 - 12.603: 98.6818% ( 2) 00:07:58.585 12.800 - 12.898: 98.6928% ( 2) 00:07:58.585 12.898 - 12.997: 98.7365% ( 8) 00:07:58.585 12.997 - 13.095: 98.7639% ( 5) 00:07:58.585 13.095 - 13.194: 98.8678% ( 19) 00:07:58.585 13.194 - 13.292: 98.9170% ( 9) 00:07:58.585 13.292 - 13.391: 98.9991% ( 15) 00:07:58.585 13.391 - 13.489: 99.0592% ( 11) 00:07:58.585 13.489 - 13.588: 99.1303% ( 13) 00:07:58.585 13.588 - 13.686: 99.2069% ( 14) 00:07:58.586 13.686 - 13.785: 99.2944% ( 16) 00:07:58.586 13.785 - 13.883: 99.3655% ( 13) 00:07:58.586 13.883 - 13.982: 99.4585% ( 
17) 00:07:58.586 13.982 - 14.080: 99.4804% ( 4) 00:07:58.586 14.080 - 14.178: 99.5296% ( 9) 00:07:58.586 14.178 - 14.277: 99.5843% ( 10) 00:07:58.586 14.277 - 14.375: 99.6226% ( 7) 00:07:58.586 14.375 - 14.474: 99.6718% ( 9) 00:07:58.586 14.474 - 14.572: 99.6773% ( 1) 00:07:58.586 14.572 - 14.671: 99.6937% ( 3) 00:07:58.586 14.671 - 14.769: 99.7156% ( 4) 00:07:58.586 14.769 - 14.868: 99.7320% ( 3) 00:07:58.586 14.868 - 14.966: 99.7539% ( 4) 00:07:58.586 14.966 - 15.065: 99.7593% ( 1) 00:07:58.586 15.163 - 15.262: 99.7757% ( 3) 00:07:58.586 15.262 - 15.360: 99.7867% ( 2) 00:07:58.586 15.360 - 15.458: 99.7976% ( 2) 00:07:58.586 15.557 - 15.655: 99.8086% ( 2) 00:07:58.586 15.655 - 15.754: 99.8140% ( 1) 00:07:58.586 15.754 - 15.852: 99.8250% ( 2) 00:07:58.586 15.852 - 15.951: 99.8359% ( 2) 00:07:58.586 16.640 - 16.738: 99.8469% ( 2) 00:07:58.586 16.738 - 16.837: 99.8523% ( 1) 00:07:58.586 16.837 - 16.935: 99.8578% ( 1) 00:07:58.586 16.935 - 17.034: 99.8633% ( 1) 00:07:58.586 17.034 - 17.132: 99.8687% ( 1) 00:07:58.586 17.231 - 17.329: 99.8742% ( 1) 00:07:58.586 17.329 - 17.428: 99.8851% ( 2) 00:07:58.586 17.723 - 17.822: 99.8906% ( 1) 00:07:58.586 17.822 - 17.920: 99.8961% ( 1) 00:07:58.586 18.314 - 18.412: 99.9070% ( 2) 00:07:58.586 18.412 - 18.511: 99.9125% ( 1) 00:07:58.586 18.905 - 19.003: 99.9180% ( 1) 00:07:58.586 19.003 - 19.102: 99.9234% ( 1) 00:07:58.586 19.200 - 19.298: 99.9289% ( 1) 00:07:58.586 19.397 - 19.495: 99.9344% ( 1) 00:07:58.586 19.594 - 19.692: 99.9398% ( 1) 00:07:58.586 19.692 - 19.791: 99.9453% ( 1) 00:07:58.586 19.889 - 19.988: 99.9508% ( 1) 00:07:58.586 20.677 - 20.775: 99.9562% ( 1) 00:07:58.586 23.040 - 23.138: 99.9617% ( 1) 00:07:58.586 23.434 - 23.532: 99.9672% ( 1) 00:07:58.586 23.532 - 23.631: 99.9727% ( 1) 00:07:58.586 24.025 - 24.123: 99.9781% ( 1) 00:07:58.586 37.809 - 38.006: 99.9836% ( 1) 00:07:58.586 41.748 - 41.945: 99.9891% ( 1) 00:07:58.586 51.200 - 51.594: 99.9945% ( 1) 00:07:58.586 64.197 - 64.591: 100.0000% ( 1) 00:07:58.586 
00:07:58.586 ************************************ 00:07:58.586 END TEST nvme_overhead 00:07:58.586 ************************************ 00:07:58.586 00:07:58.586 real 0m1.178s 00:07:58.586 user 0m1.061s 00:07:58.586 sys 0m0.073s 00:07:58.586 01:07:31 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.586 01:07:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:58.586 01:07:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:58.586 01:07:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:58.586 01:07:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.586 01:07:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.586 ************************************ 00:07:58.586 START TEST nvme_arbitration 00:07:58.586 ************************************ 00:07:58.586 01:07:31 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:01.867 Initializing NVMe Controllers 00:08:01.867 Attached to 0000:00:13.0 00:08:01.867 Attached to 0000:00:10.0 00:08:01.867 Attached to 0000:00:11.0 00:08:01.867 Attached to 0000:00:12.0 00:08:01.867 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:01.867 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:01.867 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:01.867 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:01.867 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:01.867 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:01.867 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:01.867 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:01.867 Initialization complete. Launching workers. 
00:08:01.867 Starting thread on core 1 with urgent priority queue 00:08:01.867 Starting thread on core 2 with urgent priority queue 00:08:01.867 Starting thread on core 3 with urgent priority queue 00:08:01.867 Starting thread on core 0 with urgent priority queue 00:08:01.867 QEMU NVMe Ctrl (12343 ) core 0: 6889.33 IO/s 14.52 secs/100000 ios 00:08:01.867 QEMU NVMe Ctrl (12342 ) core 0: 6954.67 IO/s 14.38 secs/100000 ios 00:08:01.867 QEMU NVMe Ctrl (12340 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios 00:08:01.867 QEMU NVMe Ctrl (12342 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios 00:08:01.867 QEMU NVMe Ctrl (12341 ) core 2: 6677.33 IO/s 14.98 secs/100000 ios 00:08:01.867 QEMU NVMe Ctrl (12342 ) core 3: 6592.00 IO/s 15.17 secs/100000 ios 00:08:01.867 ======================================================== 00:08:01.867 00:08:01.867 ************************************ 00:08:01.867 END TEST nvme_arbitration 00:08:01.867 ************************************ 00:08:01.867 00:08:01.867 real 0m3.211s 00:08:01.867 user 0m9.040s 00:08:01.867 sys 0m0.088s 00:08:01.867 01:07:35 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.867 01:07:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:01.867 01:07:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.867 01:07:35 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.867 01:07:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.867 01:07:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.867 ************************************ 00:08:01.867 START TEST nvme_single_aen 00:08:01.867 ************************************ 00:08:01.867 01:07:35 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.867 Asynchronous Event Request test 00:08:01.867 Attached to 0000:00:13.0 00:08:01.867 Attached to 
0000:00:10.0 00:08:01.867 Attached to 0000:00:11.0 00:08:01.867 Attached to 0000:00:12.0 00:08:01.867 Reset controller to setup AER completions for this process 00:08:01.867 Registering asynchronous event callbacks... 00:08:01.867 Getting orig temperature thresholds of all controllers 00:08:01.867 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.867 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.867 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.867 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.867 Setting all controllers temperature threshold low to trigger AER 00:08:01.867 Waiting for all controllers temperature threshold to be set lower 00:08:01.868 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.868 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:01.868 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.868 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:01.868 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.868 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:01.868 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.868 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:01.868 Waiting for all controllers to trigger AER and reset threshold 00:08:01.868 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.868 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.868 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.868 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.868 Cleaning up... 
00:08:01.868 00:08:01.868 real 0m0.173s 00:08:01.868 user 0m0.065s 00:08:01.868 sys 0m0.073s 00:08:01.868 01:07:35 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.868 01:07:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:01.868 ************************************ 00:08:01.868 END TEST nvme_single_aen 00:08:01.868 ************************************ 00:08:01.868 01:07:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:01.868 01:07:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.868 01:07:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.868 01:07:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.868 ************************************ 00:08:01.868 START TEST nvme_doorbell_aers 00:08:01.868 ************************************ 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.868 01:07:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:02.126 [2024-12-14 01:07:35.610226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:12.116 Executing: test_write_invalid_db 00:08:12.116 Waiting for AER completion... 00:08:12.116 Failure: test_write_invalid_db 00:08:12.116 00:08:12.116 Executing: test_invalid_db_write_overflow_sq 00:08:12.116 Waiting for AER completion... 00:08:12.116 Failure: test_invalid_db_write_overflow_sq 00:08:12.116 00:08:12.116 Executing: test_invalid_db_write_overflow_cq 00:08:12.116 Waiting for AER completion... 00:08:12.116 Failure: test_invalid_db_write_overflow_cq 00:08:12.116 00:08:12.116 01:07:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.116 01:07:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:12.116 [2024-12-14 01:07:45.616865] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:22.085 Executing: test_write_invalid_db 00:08:22.085 Waiting for AER completion... 00:08:22.085 Failure: test_write_invalid_db 00:08:22.085 00:08:22.085 Executing: test_invalid_db_write_overflow_sq 00:08:22.085 Waiting for AER completion... 
00:08:22.085 Failure: test_invalid_db_write_overflow_sq 00:08:22.085 00:08:22.085 Executing: test_invalid_db_write_overflow_cq 00:08:22.085 Waiting for AER completion... 00:08:22.085 Failure: test_invalid_db_write_overflow_cq 00:08:22.085 00:08:22.085 01:07:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:22.085 01:07:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:22.085 [2024-12-14 01:07:55.674418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:32.091 Executing: test_write_invalid_db 00:08:32.091 Waiting for AER completion... 00:08:32.091 Failure: test_write_invalid_db 00:08:32.091 00:08:32.091 Executing: test_invalid_db_write_overflow_sq 00:08:32.091 Waiting for AER completion... 00:08:32.091 Failure: test_invalid_db_write_overflow_sq 00:08:32.091 00:08:32.091 Executing: test_invalid_db_write_overflow_cq 00:08:32.091 Waiting for AER completion... 00:08:32.091 Failure: test_invalid_db_write_overflow_cq 00:08:32.091 00:08:32.091 01:08:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:32.091 01:08:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:32.091 [2024-12-14 01:08:05.690328] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.063 Executing: test_write_invalid_db 00:08:42.063 Waiting for AER completion... 00:08:42.063 Failure: test_write_invalid_db 00:08:42.063 00:08:42.063 Executing: test_invalid_db_write_overflow_sq 00:08:42.063 Waiting for AER completion... 
00:08:42.063 Failure: test_invalid_db_write_overflow_sq 00:08:42.063 00:08:42.063 Executing: test_invalid_db_write_overflow_cq 00:08:42.063 Waiting for AER completion... 00:08:42.063 Failure: test_invalid_db_write_overflow_cq 00:08:42.063 00:08:42.063 00:08:42.063 real 0m40.176s 00:08:42.063 user 0m34.095s 00:08:42.063 sys 0m5.729s 00:08:42.063 01:08:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.063 01:08:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 ************************************ 00:08:42.063 END TEST nvme_doorbell_aers 00:08:42.063 ************************************ 00:08:42.063 01:08:15 nvme -- nvme/nvme.sh@97 -- # uname 00:08:42.063 01:08:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:42.063 01:08:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:42.063 01:08:15 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:42.063 01:08:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.063 01:08:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 ************************************ 00:08:42.063 START TEST nvme_multi_aen 00:08:42.063 ************************************ 00:08:42.063 01:08:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:42.323 [2024-12-14 01:08:15.732586] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.732651] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.732662] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. 
Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.734039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.734068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.734075] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.735033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.323 [2024-12-14 01:08:15.735057] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.324 [2024-12-14 01:08:15.735064] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.324 [2024-12-14 01:08:15.735986] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.324 [2024-12-14 01:08:15.736010] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 00:08:42.324 [2024-12-14 01:08:15.736017] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76459) is not found. Dropping the request. 
00:08:42.324 Child process pid: 76979 00:08:42.324 [Child] Asynchronous Event Request test 00:08:42.324 [Child] Attached to 0000:00:13.0 00:08:42.324 [Child] Attached to 0000:00:10.0 00:08:42.324 [Child] Attached to 0000:00:11.0 00:08:42.324 [Child] Attached to 0000:00:12.0 00:08:42.324 [Child] Registering asynchronous event callbacks... 00:08:42.324 [Child] Getting orig temperature thresholds of all controllers 00:08:42.324 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:42.324 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 [Child] Cleaning up... 00:08:42.324 Asynchronous Event Request test 00:08:42.324 Attached to 0000:00:13.0 00:08:42.324 Attached to 0000:00:10.0 00:08:42.324 Attached to 0000:00:11.0 00:08:42.324 Attached to 0000:00:12.0 00:08:42.324 Reset controller to setup AER completions for this process 00:08:42.324 Registering asynchronous event callbacks... 
00:08:42.324 Getting orig temperature thresholds of all controllers 00:08:42.324 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.324 Setting all controllers temperature threshold low to trigger AER 00:08:42.324 Waiting for all controllers temperature threshold to be set lower 00:08:42.324 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:42.324 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:42.324 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:42.324 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.324 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:42.324 Waiting for all controllers to trigger AER and reset threshold 00:08:42.324 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.324 Cleaning up... 
00:08:42.585 00:08:42.585 real 0m0.353s 00:08:42.585 user 0m0.121s 00:08:42.585 sys 0m0.130s 00:08:42.585 01:08:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.585 01:08:15 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:42.585 ************************************ 00:08:42.585 END TEST nvme_multi_aen 00:08:42.585 ************************************ 00:08:42.585 01:08:15 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.585 01:08:15 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.585 01:08:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.585 01:08:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.585 ************************************ 00:08:42.585 START TEST nvme_startup 00:08:42.585 ************************************ 00:08:42.585 01:08:15 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.585 Initializing NVMe Controllers 00:08:42.585 Attached to 0000:00:13.0 00:08:42.585 Attached to 0000:00:10.0 00:08:42.585 Attached to 0000:00:11.0 00:08:42.585 Attached to 0000:00:12.0 00:08:42.585 Initialization complete. 00:08:42.585 Time used:113705.133 (us). 
00:08:42.585 00:08:42.585 real 0m0.160s 00:08:42.585 user 0m0.052s 00:08:42.585 sys 0m0.069s 00:08:42.585 01:08:16 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.585 01:08:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:42.585 ************************************ 00:08:42.585 END TEST nvme_startup 00:08:42.585 ************************************ 00:08:42.585 01:08:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:42.585 01:08:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.585 01:08:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.585 01:08:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.585 ************************************ 00:08:42.585 START TEST nvme_multi_secondary 00:08:42.585 ************************************ 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77030 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77031 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:42.585 01:08:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:45.872 Initializing NVMe Controllers 00:08:45.872 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.872 
Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:45.872 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:45.872 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:45.872 Initialization complete. Launching workers. 00:08:45.872 ======================================================== 00:08:45.872 Latency(us) 00:08:45.872 Device Information : IOPS MiB/s Average min max 00:08:45.872 PCIE (0000:00:13.0) NSID 1 from core 1: 7597.42 29.68 2105.54 759.28 5833.59 00:08:45.872 PCIE (0000:00:10.0) NSID 1 from core 1: 7597.42 29.68 2104.65 727.06 5935.85 00:08:45.872 PCIE (0000:00:11.0) NSID 1 from core 1: 7597.42 29.68 2105.71 742.92 6056.30 00:08:45.872 PCIE (0000:00:12.0) NSID 1 from core 1: 7597.42 29.68 2105.79 782.11 5831.92 00:08:45.872 PCIE (0000:00:12.0) NSID 2 from core 1: 7597.42 29.68 2105.72 789.62 5715.64 00:08:45.872 PCIE (0000:00:12.0) NSID 3 from core 1: 7597.42 29.68 2105.78 755.96 5665.66 00:08:45.872 ======================================================== 00:08:45.872 Total : 45584.55 178.06 2105.53 727.06 6056.30 00:08:45.872 00:08:45.872 Initializing NVMe Controllers 00:08:45.872 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.872 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.872 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:45.872 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:45.872 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:45.872 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:45.872 
Initialization complete. Launching workers. 00:08:45.872 ======================================================== 00:08:45.872 Latency(us) 00:08:45.872 Device Information : IOPS MiB/s Average min max 00:08:45.872 PCIE (0000:00:13.0) NSID 1 from core 2: 3169.93 12.38 5046.98 1081.69 21378.99 00:08:45.872 PCIE (0000:00:10.0) NSID 1 from core 2: 3169.93 12.38 5044.98 1048.26 25051.60 00:08:45.872 PCIE (0000:00:11.0) NSID 1 from core 2: 3169.93 12.38 5047.15 934.71 24755.95 00:08:45.872 PCIE (0000:00:12.0) NSID 1 from core 2: 3169.93 12.38 5047.22 966.94 20490.98 00:08:45.872 PCIE (0000:00:12.0) NSID 2 from core 2: 3169.93 12.38 5047.39 1079.53 20448.81 00:08:45.872 PCIE (0000:00:12.0) NSID 3 from core 2: 3169.93 12.38 5047.41 1073.54 20523.16 00:08:45.872 ======================================================== 00:08:45.872 Total : 19019.57 74.30 5046.86 934.71 25051.60 00:08:45.872 00:08:45.872 01:08:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77030 00:08:48.398 Initializing NVMe Controllers 00:08:48.398 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.398 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.398 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.398 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.398 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:48.398 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:48.398 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:48.398 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:48.398 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:48.398 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:48.398 Initialization complete. Launching workers. 
00:08:48.398 ======================================================== 00:08:48.398 Latency(us) 00:08:48.398 Device Information : IOPS MiB/s Average min max 00:08:48.398 PCIE (0000:00:13.0) NSID 1 from core 0: 11183.89 43.69 1430.20 673.17 5807.30 00:08:48.398 PCIE (0000:00:10.0) NSID 1 from core 0: 11183.89 43.69 1429.30 666.12 5729.40 00:08:48.398 PCIE (0000:00:11.0) NSID 1 from core 0: 11183.89 43.69 1430.10 670.91 5600.36 00:08:48.398 PCIE (0000:00:12.0) NSID 1 from core 0: 11183.89 43.69 1430.04 536.95 5356.80 00:08:48.398 PCIE (0000:00:12.0) NSID 2 from core 0: 11183.89 43.69 1429.98 474.25 5986.64 00:08:48.398 PCIE (0000:00:12.0) NSID 3 from core 0: 11183.89 43.69 1429.91 398.41 6183.87 00:08:48.398 ======================================================== 00:08:48.398 Total : 67103.37 262.12 1429.92 398.41 6183.87 00:08:48.398 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77031 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77100 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77101 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:48.398 01:08:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:51.754 Initializing NVMe Controllers 00:08:51.754 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.754 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:51.754 Associating 
PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:51.754 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:51.754 Initialization complete. Launching workers. 00:08:51.754 ======================================================== 00:08:51.754 Latency(us) 00:08:51.754 Device Information : IOPS MiB/s Average min max 00:08:51.754 PCIE (0000:00:13.0) NSID 1 from core 0: 7053.03 27.55 2268.05 745.09 6133.32 00:08:51.754 PCIE (0000:00:10.0) NSID 1 from core 0: 7053.03 27.55 2267.49 724.30 6007.46 00:08:51.754 PCIE (0000:00:11.0) NSID 1 from core 0: 7053.03 27.55 2268.54 710.98 6034.85 00:08:51.754 PCIE (0000:00:12.0) NSID 1 from core 0: 7053.03 27.55 2268.53 726.56 6343.75 00:08:51.754 PCIE (0000:00:12.0) NSID 2 from core 0: 7053.03 27.55 2268.50 742.60 6676.09 00:08:51.754 PCIE (0000:00:12.0) NSID 3 from core 0: 7053.03 27.55 2268.50 741.73 6271.43 00:08:51.754 ======================================================== 00:08:51.754 Total : 42318.20 165.31 2268.27 710.98 6676.09 00:08:51.754 00:08:51.754 Initializing NVMe Controllers 00:08:51.754 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.754 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.754 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:51.754 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:51.754 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:51.754 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:51.754 Initialization complete. Launching workers. 
00:08:51.754 ======================================================== 00:08:51.754 Latency(us) 00:08:51.754 Device Information : IOPS MiB/s Average min max 00:08:51.754 PCIE (0000:00:13.0) NSID 1 from core 1: 7039.13 27.50 2272.49 760.10 5957.15 00:08:51.754 PCIE (0000:00:10.0) NSID 1 from core 1: 7039.13 27.50 2271.60 751.22 6312.74 00:08:51.754 PCIE (0000:00:11.0) NSID 1 from core 1: 7039.13 27.50 2272.53 771.50 7111.24 00:08:51.754 PCIE (0000:00:12.0) NSID 1 from core 1: 7039.13 27.50 2272.43 770.69 6833.32 00:08:51.754 PCIE (0000:00:12.0) NSID 2 from core 1: 7039.13 27.50 2272.34 758.08 6655.67 00:08:51.754 PCIE (0000:00:12.0) NSID 3 from core 1: 7039.13 27.50 2272.25 761.13 6168.16 00:08:51.754 ======================================================== 00:08:51.754 Total : 42234.79 164.98 2272.27 751.22 7111.24 00:08:51.754 00:08:53.138 Initializing NVMe Controllers 00:08:53.139 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.139 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.139 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.139 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.139 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:53.139 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:53.139 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:53.139 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:53.139 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:53.139 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:53.139 Initialization complete. Launching workers. 
00:08:53.139 ======================================================== 00:08:53.139 Latency(us) 00:08:53.139 Device Information : IOPS MiB/s Average min max 00:08:53.139 PCIE (0000:00:13.0) NSID 1 from core 2: 4207.09 16.43 3802.35 797.60 13333.49 00:08:53.139 PCIE (0000:00:10.0) NSID 1 from core 2: 4207.09 16.43 3800.36 755.62 13136.07 00:08:53.139 PCIE (0000:00:11.0) NSID 1 from core 2: 4207.09 16.43 3802.50 736.85 15709.78 00:08:53.139 PCIE (0000:00:12.0) NSID 1 from core 2: 4207.09 16.43 3802.44 615.65 12923.11 00:08:53.139 PCIE (0000:00:12.0) NSID 2 from core 2: 4207.09 16.43 3802.54 539.50 13178.73 00:08:53.139 PCIE (0000:00:12.0) NSID 3 from core 2: 4207.09 16.43 3802.48 452.06 13361.94 00:08:53.139 ======================================================== 00:08:53.139 Total : 25242.52 98.60 3802.11 452.06 15709.78 00:08:53.139 00:08:53.400 01:08:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77100 00:08:53.400 01:08:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77101 00:08:53.400 00:08:53.400 real 0m10.580s 00:08:53.400 user 0m18.287s 00:08:53.400 sys 0m0.484s 00:08:53.400 01:08:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.400 01:08:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 ************************************ 00:08:53.400 END TEST nvme_multi_secondary 00:08:53.400 ************************************ 00:08:53.400 01:08:26 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:53.400 01:08:26 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76056 ]] 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1094 -- # kill 76056 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1095 -- # wait 76056 00:08:53.400 [2024-12-14 01:08:26.788034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. 
Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.788149] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.788179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.788206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.789050] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.789119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.789146] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.789177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.789932] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790006] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 
00:08:53.400 [2024-12-14 01:08:26.790068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790915] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 [2024-12-14 01:08:26.790943] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76978) is not found. Dropping the request. 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:53.400 01:08:26 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.400 01:08:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 ************************************ 00:08:53.400 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:53.400 ************************************ 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.400 * Looking for test storage... 
00:08:53.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.400 01:08:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.400 --rc genhtml_branch_coverage=1 00:08:53.400 --rc genhtml_function_coverage=1 00:08:53.400 --rc genhtml_legend=1 00:08:53.400 --rc geninfo_all_blocks=1 00:08:53.400 --rc geninfo_unexecuted_blocks=1 00:08:53.400 00:08:53.400 ' 00:08:53.400 01:08:27 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.400 --rc genhtml_branch_coverage=1 00:08:53.400 --rc genhtml_function_coverage=1 00:08:53.400 --rc genhtml_legend=1 00:08:53.400 --rc geninfo_all_blocks=1 00:08:53.400 --rc geninfo_unexecuted_blocks=1 00:08:53.400 00:08:53.400 ' 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.400 --rc genhtml_branch_coverage=1 00:08:53.400 --rc genhtml_function_coverage=1 00:08:53.400 --rc genhtml_legend=1 00:08:53.400 --rc geninfo_all_blocks=1 00:08:53.400 --rc geninfo_unexecuted_blocks=1 00:08:53.400 00:08:53.400 ' 00:08:53.400 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.401 --rc genhtml_branch_coverage=1 00:08:53.401 --rc genhtml_function_coverage=1 00:08:53.401 --rc genhtml_legend=1 00:08:53.401 --rc geninfo_all_blocks=1 00:08:53.401 --rc geninfo_unexecuted_blocks=1 00:08:53.401 00:08:53.401 ' 00:08:53.401 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:53.401 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:53.401 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:53.401 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:53.401 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:53.661 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:53.661 01:08:27 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77262 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 
00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77262 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77262 ']' 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.662 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 [2024-12-14 01:08:27.156022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:53.662 [2024-12-14 01:08:27.156414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77262 ] 00:08:53.922 [2024-12-14 01:08:27.317882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.922 [2024-12-14 01:08:27.351123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.922 [2024-12-14 01:08:27.351466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.922 [2024-12-14 01:08:27.351779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.922 [2024-12-14 01:08:27.351827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.493 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.493 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:54.493 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:54.493 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.493 01:08:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.493 nvme0n1 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_xoymy.txt 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 
--timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.493 true 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734138508 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77285 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:54.493 01:08:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.035 [2024-12-14 01:08:30.080186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:57.035 [2024-12-14 01:08:30.080478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:57.035 [2024-12-14 01:08:30.080640] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:57.035 [2024-12-14 01:08:30.080700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.035 [2024-12-14 01:08:30.082109] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:57.035 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77285 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77285 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77285 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_xoymy.txt 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # 
spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:57.035 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_xoymy.txt 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77262 ']' 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.036 killing process with pid 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77262' 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77262 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != 
nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:57.036 ************************************ 00:08:57.036 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:57.036 ************************************ 00:08:57.036 00:08:57.036 real 0m3.544s 00:08:57.036 user 0m12.606s 00:08:57.036 sys 0m0.486s 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.036 01:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.036 01:08:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:57.036 01:08:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:57.036 01:08:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.036 01:08:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.036 01:08:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.036 ************************************ 00:08:57.036 START TEST nvme_fio 00:08:57.036 ************************************ 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.036 01:08:30 nvme.nvme_fio -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:57.036 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.036 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.296 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.296 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.556 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.556 01:08:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.556 01:08:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.556 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.556 fio-3.35 00:08:57.556 Starting 1 thread 00:09:04.171 00:09:04.171 test: (groupid=0, jobs=1): err= 0: pid=77413: Sat Dec 14 01:08:37 2024 00:09:04.171 read: IOPS=22.8k, BW=89.1MiB/s (93.5MB/s)(178MiB/2001msec) 00:09:04.171 slat (nsec): min=3368, max=49037, avg=4925.13, stdev=2042.29 00:09:04.171 clat (usec): min=473, max=10820, avg=2792.81, stdev=917.78 00:09:04.171 lat (usec): min=482, max=10853, avg=2797.74, stdev=918.87 00:09:04.171 clat percentiles (usec): 00:09:04.171 | 1.00th=[ 1795], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343], 
00:09:04.171 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2540], 00:09:04.171 | 70.00th=[ 2671], 80.00th=[ 2966], 90.00th=[ 4015], 95.00th=[ 5145], 00:09:04.171 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 8848], 00:09:04.171 | 99.99th=[10683] 00:09:04.171 bw ( KiB/s): min=81880, max=102864, per=98.14%, avg=89570.67, stdev=11559.61, samples=3 00:09:04.171 iops : min=20470, max=25716, avg=22392.67, stdev=2889.90, samples=3 00:09:04.171 write: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(177MiB/2001msec); 0 zone resets 00:09:04.171 slat (nsec): min=3509, max=72761, avg=5125.49, stdev=2033.26 00:09:04.171 clat (usec): min=433, max=10755, avg=2817.11, stdev=938.44 00:09:04.171 lat (usec): min=442, max=10769, avg=2822.23, stdev=939.50 00:09:04.171 clat percentiles (usec): 00:09:04.171 | 1.00th=[ 1795], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:04.171 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:04.171 | 70.00th=[ 2704], 80.00th=[ 2999], 90.00th=[ 4080], 95.00th=[ 5145], 00:09:04.171 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7963], 99.95th=[ 9110], 00:09:04.171 | 99.99th=[10552] 00:09:04.171 bw ( KiB/s): min=82072, max=103144, per=98.90%, avg=89714.67, stdev=11667.11, samples=3 00:09:04.171 iops : min=20518, max=25786, avg=22428.67, stdev=2916.78, samples=3 00:09:04.171 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:04.171 lat (msec) : 2=2.10%, 4=87.59%, 10=10.25%, 20=0.03% 00:09:04.171 cpu : usr=99.25%, sys=0.00%, ctx=4, majf=0, minf=625 00:09:04.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.171 issued rwts: total=45657,45379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.171 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.171 00:09:04.171 Run status group 0 (all 
jobs): 00:09:04.171 READ: bw=89.1MiB/s (93.5MB/s), 89.1MiB/s-89.1MiB/s (93.5MB/s-93.5MB/s), io=178MiB (187MB), run=2001-2001msec 00:09:04.171 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=177MiB (186MB), run=2001-2001msec 00:09:04.172 ----------------------------------------------------- 00:09:04.172 Suppressions used: 00:09:04.172 count bytes template 00:09:04.172 1 32 /usr/src/fio/parse.c 00:09:04.172 1 8 libtcmalloc_minimal.so 00:09:04.172 ----------------------------------------------------- 00:09:04.172 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.172 01:08:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.172 01:08:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.432 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.432 fio-3.35 00:09:04.432 Starting 1 thread 00:09:11.019 00:09:11.019 test: (groupid=0, jobs=1): err= 0: pid=77469: Sat Dec 14 01:08:44 2024 00:09:11.019 read: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(181MiB/2001msec) 00:09:11.019 slat (nsec): min=3350, max=58258, avg=4859.03, stdev=1938.68 00:09:11.019 clat (usec): min=309, max=9310, avg=2760.14, stdev=871.57 00:09:11.019 lat (usec): min=314, max=9354, avg=2764.99, stdev=872.56 00:09:11.019 clat percentiles (usec): 
00:09:11.019 | 1.00th=[ 1565], 5.00th=[ 2057], 10.00th=[ 2180], 20.00th=[ 2311], 00:09:11.019 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:11.019 | 70.00th=[ 2704], 80.00th=[ 2933], 90.00th=[ 3884], 95.00th=[ 4883], 00:09:11.019 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7635], 99.95th=[ 8029], 00:09:11.019 | 99.99th=[ 9110] 00:09:11.019 bw ( KiB/s): min=81600, max=95088, per=97.81%, avg=90410.67, stdev=7635.11, samples=3 00:09:11.019 iops : min=20400, max=23770, avg=22602.67, stdev=1908.71, samples=3 00:09:11.019 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:09:11.019 slat (nsec): min=3448, max=65924, avg=5099.13, stdev=2046.77 00:09:11.019 clat (usec): min=203, max=9165, avg=2776.72, stdev=871.85 00:09:11.019 lat (usec): min=208, max=9181, avg=2781.82, stdev=872.84 00:09:11.019 clat percentiles (usec): 00:09:11.019 | 1.00th=[ 1598], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:11.019 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2573], 00:09:11.019 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 3884], 95.00th=[ 4883], 00:09:11.019 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 8029], 00:09:11.019 | 99.99th=[ 8979] 00:09:11.019 bw ( KiB/s): min=83216, max=94424, per=98.64%, avg=90656.00, stdev=6443.41, samples=3 00:09:11.019 iops : min=20804, max=23606, avg=22664.00, stdev=1610.85, samples=3 00:09:11.019 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:11.019 lat (msec) : 2=3.54%, 4=87.16%, 10=9.24% 00:09:11.019 cpu : usr=99.15%, sys=0.15%, ctx=13, majf=0, minf=626 00:09:11.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.019 issued rwts: total=46241,45974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.019 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:11.019 00:09:11.019 Run status group 0 (all jobs): 00:09:11.019 READ: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=181MiB (189MB), run=2001-2001msec 00:09:11.019 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:11.019 ----------------------------------------------------- 00:09:11.019 Suppressions used: 00:09:11.019 count bytes template 00:09:11.019 1 32 /usr/src/fio/parse.c 00:09:11.019 1 8 libtcmalloc_minimal.so 00:09:11.019 ----------------------------------------------------- 00:09:11.019 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.019 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:11.281 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:11.281 01:08:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:11.281 01:08:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.282 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:11.282 fio-3.35 00:09:11.282 Starting 1 thread 00:09:19.419 00:09:19.419 test: (groupid=0, jobs=1): err= 0: pid=77524: Sat Dec 14 01:08:52 2024 00:09:19.420 read: IOPS=24.0k, BW=93.7MiB/s (98.3MB/s)(188MiB/2001msec) 00:09:19.420 slat (nsec): min=3352, max=48158, avg=4873.80, stdev=1972.45 00:09:19.420 clat (usec): min=200, max=10847, avg=2667.05, stdev=802.74 00:09:19.420 lat (usec): 
min=205, max=10895, avg=2671.93, stdev=803.87 00:09:19.420 clat percentiles (usec): 00:09:19.420 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:19.420 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:19.420 | 70.00th=[ 2507], 80.00th=[ 2671], 90.00th=[ 3458], 95.00th=[ 4752], 00:09:19.420 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7177], 99.95th=[ 8356], 00:09:19.420 | 99.99th=[10683] 00:09:19.420 bw ( KiB/s): min=96888, max=98576, per=100.00%, avg=97866.67, stdev=875.64, samples=3 00:09:19.420 iops : min=24222, max=24644, avg=24466.67, stdev=218.91, samples=3 00:09:19.420 write: IOPS=23.9k, BW=93.2MiB/s (97.7MB/s)(186MiB/2001msec); 0 zone resets 00:09:19.420 slat (nsec): min=3475, max=93852, avg=5117.37, stdev=2158.20 00:09:19.420 clat (usec): min=219, max=10751, avg=2664.61, stdev=799.25 00:09:19.420 lat (usec): min=224, max=10768, avg=2669.73, stdev=800.42 00:09:19.420 clat percentiles (usec): 00:09:19.420 | 1.00th=[ 1565], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:19.420 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:19.420 | 70.00th=[ 2507], 80.00th=[ 2671], 90.00th=[ 3425], 95.00th=[ 4686], 00:09:19.420 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7177], 99.95th=[ 8979], 00:09:19.420 | 99.99th=[10552] 00:09:19.420 bw ( KiB/s): min=96496, max=99208, per=100.00%, avg=97850.67, stdev=1356.00, samples=3 00:09:19.420 iops : min=24124, max=24802, avg=24462.67, stdev=339.00, samples=3 00:09:19.420 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:19.420 lat (msec) : 2=3.64%, 4=88.79%, 10=7.50%, 20=0.03% 00:09:19.420 cpu : usr=99.20%, sys=0.10%, ctx=2, majf=0, minf=626 00:09:19.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:19.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.420 issued rwts: 
total=48015,47735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.420 00:09:19.420 Run status group 0 (all jobs): 00:09:19.420 READ: bw=93.7MiB/s (98.3MB/s), 93.7MiB/s-93.7MiB/s (98.3MB/s-98.3MB/s), io=188MiB (197MB), run=2001-2001msec 00:09:19.420 WRITE: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=186MiB (196MB), run=2001-2001msec 00:09:19.420 ----------------------------------------------------- 00:09:19.420 Suppressions used: 00:09:19.420 count bytes template 00:09:19.420 1 32 /usr/src/fio/parse.c 00:09:19.420 1 8 libtcmalloc_minimal.so 00:09:19.420 ----------------------------------------------------- 00:09:19.420 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:19.420 01:08:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:19.420 01:08:52 
nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:19.420 01:08:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.420 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:19.420 fio-3.35 00:09:19.420 Starting 1 thread 00:09:24.702 00:09:24.702 test: (groupid=0, jobs=1): err= 0: pid=77579: Sat Dec 14 01:08:57 2024 00:09:24.702 read: IOPS=19.3k, BW=75.4MiB/s (79.1MB/s)(151MiB/2001msec) 00:09:24.702 slat (nsec): min=3332, max=83334, avg=5444.66, stdev=2938.89 
00:09:24.702 clat (usec): min=670, max=11092, avg=3291.30, stdev=1225.17 00:09:24.702 lat (usec): min=676, max=11152, avg=3296.74, stdev=1226.50 00:09:24.702 clat percentiles (usec): 00:09:24.702 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:24.702 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2802], 60.00th=[ 2999], 00:09:24.702 | 70.00th=[ 3392], 80.00th=[ 4228], 90.00th=[ 5211], 95.00th=[ 6063], 00:09:24.702 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 9765], 00:09:24.702 | 99.99th=[10945] 00:09:24.702 bw ( KiB/s): min=76656, max=80600, per=100.00%, avg=78866.67, stdev=2014.86, samples=3 00:09:24.702 iops : min=19164, max=20150, avg=19716.67, stdev=503.72, samples=3 00:09:24.702 write: IOPS=19.3k, BW=75.3MiB/s (79.0MB/s)(151MiB/2001msec); 0 zone resets 00:09:24.702 slat (nsec): min=3489, max=71148, avg=5558.63, stdev=2825.26 00:09:24.702 clat (usec): min=659, max=11014, avg=3321.90, stdev=1230.85 00:09:24.702 lat (usec): min=665, max=11034, avg=3327.46, stdev=1232.14 00:09:24.702 clat percentiles (usec): 00:09:24.702 | 1.00th=[ 1942], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2442], 00:09:24.702 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3032], 00:09:24.702 | 70.00th=[ 3425], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 6063], 00:09:24.702 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[ 9896], 00:09:24.702 | 99.99th=[10945] 00:09:24.702 bw ( KiB/s): min=76952, max=80760, per=100.00%, avg=78976.00, stdev=1915.31, samples=3 00:09:24.702 iops : min=19238, max=20190, avg=19744.00, stdev=478.83, samples=3 00:09:24.702 lat (usec) : 750=0.01% 00:09:24.702 lat (msec) : 2=1.34%, 4=76.03%, 10=22.58%, 20=0.05% 00:09:24.702 cpu : usr=98.90%, sys=0.10%, ctx=13, majf=0, minf=624 00:09:24.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.702 issued rwts: total=38638,38574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.702 00:09:24.702 Run status group 0 (all jobs): 00:09:24.702 READ: bw=75.4MiB/s (79.1MB/s), 75.4MiB/s-75.4MiB/s (79.1MB/s-79.1MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:24.702 WRITE: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:24.702 ----------------------------------------------------- 00:09:24.702 Suppressions used: 00:09:24.702 count bytes template 00:09:24.702 1 32 /usr/src/fio/parse.c 00:09:24.702 1 8 libtcmalloc_minimal.so 00:09:24.702 ----------------------------------------------------- 00:09:24.702 00:09:24.702 01:08:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.702 01:08:57 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:24.702 00:09:24.702 real 0m27.363s 00:09:24.702 user 0m16.635s 00:09:24.702 sys 0m19.196s 00:09:24.702 01:08:57 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.702 ************************************ 00:09:24.702 END TEST nvme_fio 00:09:24.702 ************************************ 00:09:24.702 01:08:57 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:24.702 ************************************ 00:09:24.702 END TEST nvme 00:09:24.702 ************************************ 00:09:24.702 00:09:24.702 real 1m35.490s 00:09:24.702 user 3m32.302s 00:09:24.702 sys 0m29.293s 00:09:24.702 01:08:57 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.702 01:08:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.702 01:08:57 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:24.702 01:08:57 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.702 01:08:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.702 01:08:57 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.702 01:08:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.702 ************************************ 00:09:24.702 START TEST nvme_scc 00:09:24.702 ************************************ 00:09:24.702 01:08:57 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.702 * Looking for test storage... 00:09:24.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.702 01:08:58 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.702 01:08:58 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.702 01:08:58 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.702 01:08:58 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.702 01:08:58 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.703 01:08:58 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.703 01:08:58 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.703 --rc genhtml_branch_coverage=1 00:09:24.703 --rc genhtml_function_coverage=1 00:09:24.703 --rc genhtml_legend=1 00:09:24.703 --rc geninfo_all_blocks=1 00:09:24.703 --rc geninfo_unexecuted_blocks=1 00:09:24.703 00:09:24.703 ' 00:09:24.703 01:08:58 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.703 --rc genhtml_branch_coverage=1 00:09:24.703 --rc genhtml_function_coverage=1 00:09:24.703 --rc genhtml_legend=1 00:09:24.703 --rc geninfo_all_blocks=1 00:09:24.703 --rc geninfo_unexecuted_blocks=1 00:09:24.703 00:09:24.703 ' 00:09:24.703 01:08:58 nvme_scc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:24.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.703 --rc genhtml_branch_coverage=1 00:09:24.703 --rc genhtml_function_coverage=1 00:09:24.703 --rc genhtml_legend=1 00:09:24.703 --rc geninfo_all_blocks=1 00:09:24.703 --rc geninfo_unexecuted_blocks=1 00:09:24.703 00:09:24.703 ' 00:09:24.703 01:08:58 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.703 --rc genhtml_branch_coverage=1 00:09:24.703 --rc genhtml_function_coverage=1 00:09:24.703 --rc genhtml_legend=1 00:09:24.703 --rc geninfo_all_blocks=1 00:09:24.703 --rc geninfo_unexecuted_blocks=1 00:09:24.703 00:09:24.703 ' 00:09:24.703 01:08:58 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.703 01:08:58 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.703 01:08:58 nvme_scc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.703 01:08:58 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.703 01:08:58 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.703 01:08:58 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.703 01:08:58 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.703 01:08:58 
nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.703 01:08:58 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.703 01:08:58 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.703 01:08:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.703 01:08:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.703 01:08:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.703 01:08:58 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.223 Waiting for block devices as requested 00:09:25.223 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.223 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.223 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.484 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.836 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.836 01:09:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:30.836 01:09:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:30.836 01:09:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 
00:09:30.837 01:09:03 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:30.837 01:09:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:30.837 01:09:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:30.837 01:09:03 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:30.837 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:30.838 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.839 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23
-- # ng0n1[flbas]=0x4 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.840 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.841 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.841 
01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:30.841 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:30.841 
01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:30.841 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[nvmcap]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 
(in use)' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.842 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 
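[editor's note] The trace above shows the `nvme_get` pattern from `nvme/functions.sh`: each `reg : val` line emitted by `nvme id-ctrl`/`nvme id-ns` is split with `IFS=:` and `read -r reg val`, guarded by `[[ -n $val ]]`, and assigned via `eval` into a global associative array per device (`nvme0[vid]=0x1b36`, `nvme0n1[lbaf4]=...`). A simplified, hypothetical sketch of that loop follows; the real script differs in details, and the nvme-cli output is simulated here so the sketch runs without hardware:

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvme_get parsing loop traced in this log.
# Not the actual nvme/functions.sh implementation.
nvme_get_sketch() {
    local ref=$1 reg val
    declare -gA "$ref=()"                  # global assoc array, e.g. nvme1
    while IFS=: read -r reg val; do        # split "reg : val" on the first ':'
        reg=${reg//[[:space:]]/}           # strip the column padding from the key
        val=${val# }                       # drop the single space after ':'
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"         # same eval-assignment style as the trace
    done
}

# Simulated `nvme id-ctrl /dev/nvme1` output (values taken from the log above):
nvme_get_sketch nvme1 <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
sn        : 12340
mn        : QEMU NVMe Ctrl
subnqn    : nqn.2019-08.org.qemu:12340
EOF

echo "${nvme1[vid]} ${nvme1[subnqn]}"
```

Note that `read` with two variables keeps any further colons in `val`, which is why the `subnqn` value (`nqn.2019-08.org.qemu:12340`) survives intact.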
00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:30.843 01:09:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.843 01:09:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:30.843 01:09:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.843 01:09:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:30.843 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:30.843 01:09:04 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[vwci]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.843 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:30.844 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 
00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:30.844 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.844 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 
00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.845 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.846 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 
00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:30.846 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng1n1[mssrl]="128"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:30.847 01:09:04 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- 
# ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.847 01:09:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.847 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nmic]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:30.848 01:09:04 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 
-- # eval 'nvme1n1[nows]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[anagrpid]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:30.848 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.848 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 
rp:0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:10.0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:30.849 01:09:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.849 01:09:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:30.849 01:09:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.849 01:09:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 525400 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:30.849 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:30.849 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.849 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x3 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:30.850 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 
01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2[sanicap]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.850 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:30.851 01:09:04 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[iorcsz]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.851 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.852 01:09:04 
nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 
00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:30.852 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.852 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:0 lbads:9 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.853 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@18 
-- # shift 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:30.853 01:09:04 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.853 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 
00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:30.854 01:09:04 nvme_scc 
-- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:30.854 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.854 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 
00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@56 -- 
# ns_dev=ng2n3 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n3[dps]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:30.855 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.855 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.855 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 
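The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` loop that functions.sh@54 runs throughout this trace relies on bash `extglob` pathname patterns to pick up both the generic-char namespace nodes (`ng2n*`) and the block namespace nodes (`nvme2n*`) under a controller's sysfs directory. A minimal sketch of that globbing, using a temporary directory as a hypothetical stand-in for `/sys/class/nvme/nvme2`:

```shell
#!/usr/bin/env bash
# Sketch of the namespace-enumeration glob from nvme/functions.sh@54.
# extglob must be enabled before bash parses the @(...) pattern;
# nullglob makes an empty match expand to nothing instead of a literal.
shopt -s extglob nullglob

ctrl=$(mktemp -d)/nvme2            # hypothetical stand-in for /sys/class/nvme/nvme2
mkdir -p "$ctrl"
touch "$ctrl"/ng2n1 "$ctrl"/ng2n2 "$ctrl"/nvme2n1 "$ctrl"/other

# ${ctrl##*nvme} -> "2" (controller index), so the first branch is "ng2"*;
# ${ctrl##*/}    -> "nvme2", so the second branch is "nvme2n"*.
found=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    found+=("${ns##*/}")           # keep just the node name, e.g. ng2n1
done
printf '%s\n' "${found[@]}"

rm -rf "${ctrl%/*}"                # clean up the temp directory
```

The trace's functions.sh@58 then derives the namespace index from the matched name with `${ns##*n}` (e.g. `ng2n3` -> `3`) to fill `_ctrl_ns`.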
00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:30.856 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@58 -- 
# _ctrl_ns[${ns##*n}]=ng2n3 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.856 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.856 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:30.857 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n1[npdg]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:30.857 
01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.857 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.858 
01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 
01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:30.858 01:09:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.858 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[mcl]="128"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 
00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.859 01:09:04 nvme_scc -- 
00:09:30.859 01:09:04 nvme_scc -- [nvme_scc trace condensed: repetitive nvme/functions.sh nvme_get register parsing]
00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2: lbaf0-lbaf7 parsed (lbaf6 'ms:16 lbads:12 rp:0', lbaf7 'ms:64 lbads:12 rp:0'); _ctrl_ns[2]=nvme2n2
00:09:30.859 01:09:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0 'ms:0 lbads:9 rp:0', lbaf1 'ms:8 lbads:9 rp:0', lbaf2 'ms:16 lbads:9 rp:0', lbaf3 'ms:64 lbads:9 rp:0', lbaf4 'ms:0 lbads:12 rp:0 (in use)', lbaf5 'ms:8 lbads:12 rp:0', lbaf6 'ms:16 lbads:12 rp:0', lbaf7 'ms:64 lbads:12 rp:0'
00:09:30.861 01:09:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3; ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:12.0 ordered_ctrls[2]=nvme2
00:09:30.861 01:09:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]; pci=0000:00:13.0; pci_can_use 0000:00:13.0 -> 0; ctrl_dev=nvme3
00:09:30.861 01:09:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0
-- # nvme3[anacap]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.125 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@196 -- # 
type -t ctrl_has_scc 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:31.126 01:09:04 nvme_scc -- 
nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:31.126 01:09:04 nvme_scc -- 
nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:31.126 01:09:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:31.126 01:09:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:31.126 01:09:04 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:31.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.962 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.962 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.962 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.220 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.220 01:09:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:32.220 01:09:05 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.220 01:09:05 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.220 01:09:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:32.220 ************************************ 00:09:32.220 START TEST nvme_simple_copy 00:09:32.220 ************************************ 
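The `ctrl_has_scc` trace above (repeated for nvme0 through nvme3) reduces to a single bitmask test: read the controller's ONCS value from Identify Controller and check bit 8, which advertises the Copy (Simple Copy) command. A minimal standalone sketch of that test, using the `oncs=0x15d` value these QEMU controllers report in the log:

```shell
# Sketch of the ONCS bit test performed by ctrl_has_scc in nvme/functions.sh.
# ONCS (Optional NVM Command Support, Identify Controller) bit 8 indicates
# support for the Copy (Simple Copy) command; 0x15d is the value reported
# by the QEMU NVMe controllers in this run.
oncs=0x15d

if (( oncs & 1 << 8 )); then
    echo "SCC supported"       # 0x15d has bit 8 (0x100) set, so this branch runs
else
    echo "SCC not supported"
fi
```

This is why all four controllers are echoed as SCC-capable before `nvme_simple_copy` is launched against the first one returned (nvme1 at 0000:00:10.0).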
00:09:32.220 01:09:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:32.478 Initializing NVMe Controllers 00:09:32.478 Attaching to 0000:00:10.0 00:09:32.478 Controller supports SCC. Attached to 0000:00:10.0 00:09:32.478 Namespace ID: 1 size: 6GB 00:09:32.478 Initialization complete. 00:09:32.478 00:09:32.478 Controller QEMU NVMe Ctrl (12340 ) 00:09:32.478 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:32.478 Namespace Block Size:4096 00:09:32.478 Writing LBAs 0 to 63 with Random Data 00:09:32.478 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:32.478 LBAs matching Written Data: 64 00:09:32.478 00:09:32.478 real 0m0.240s 00:09:32.478 user 0m0.097s 00:09:32.478 sys 0m0.041s 00:09:32.478 ************************************ 00:09:32.478 END TEST nvme_simple_copy 00:09:32.478 ************************************ 00:09:32.478 01:09:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.478 01:09:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 ************************************ 00:09:32.478 END TEST nvme_scc 00:09:32.478 ************************************ 00:09:32.478 00:09:32.478 real 0m7.991s 00:09:32.478 user 0m1.221s 00:09:32.478 sys 0m1.429s 00:09:32.478 01:09:05 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.478 01:09:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 01:09:05 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:32.478 01:09:05 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:32.478 01:09:05 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:32.478 01:09:05 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:32.478 01:09:05 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:32.478 01:09:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.478 01:09:05 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.478 01:09:05 -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 ************************************ 00:09:32.478 START TEST nvme_fdp 00:09:32.478 ************************************ 00:09:32.478 01:09:05 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:32.478 * Looking for test storage... 00:09:32.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.479 01:09:06 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.479 01:09:06 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.479 01:09:06 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.737 01:09:06 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.737 --rc genhtml_branch_coverage=1 00:09:32.737 --rc genhtml_function_coverage=1 00:09:32.737 --rc genhtml_legend=1 00:09:32.737 --rc geninfo_all_blocks=1 00:09:32.737 --rc geninfo_unexecuted_blocks=1 00:09:32.737 00:09:32.737 ' 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.737 --rc genhtml_branch_coverage=1 00:09:32.737 --rc genhtml_function_coverage=1 00:09:32.737 --rc genhtml_legend=1 00:09:32.737 --rc geninfo_all_blocks=1 00:09:32.737 --rc geninfo_unexecuted_blocks=1 00:09:32.737 00:09:32.737 ' 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:32.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.737 --rc genhtml_branch_coverage=1 00:09:32.737 --rc genhtml_function_coverage=1 00:09:32.737 --rc genhtml_legend=1 00:09:32.737 --rc geninfo_all_blocks=1 00:09:32.737 --rc geninfo_unexecuted_blocks=1 00:09:32.737 00:09:32.737 ' 00:09:32.737 01:09:06 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.737 --rc genhtml_branch_coverage=1 00:09:32.737 --rc genhtml_function_coverage=1 00:09:32.737 --rc genhtml_legend=1 00:09:32.737 --rc geninfo_all_blocks=1 00:09:32.737 --rc geninfo_unexecuted_blocks=1 00:09:32.737 00:09:32.737 ' 00:09:32.737 01:09:06 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.737 01:09:06 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.737 01:09:06 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:32.737 01:09:06 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.738 01:09:06 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.738 01:09:06 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.738 01:09:06 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.738 01:09:06 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.738 01:09:06 nvme_fdp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.738 01:09:06 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.738 01:09:06 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.738 01:09:06 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:32.738 01:09:06 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:32.738 01:09:06 
nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:32.738 01:09:06 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:32.738 01:09:06 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.738 01:09:06 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.996 Waiting for block devices as requested 00:09:33.254 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.254 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.254 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.254 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.534 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.534 01:09:11 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.534 01:09:11 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.534 01:09:11 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.534 01:09:11 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 
00:09:38.534 01:09:11 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.534 01:09:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.534 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.535 
01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme0[cctemp]="373"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.535 01:09:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.535 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[hmmaxd]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.536 
01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.536 
01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.536 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- 
# ng0n1[fpi]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.537 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:38.538 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.538 01:09:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.538 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.539 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 
00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.539 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0n1[mssrl]=128 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.539 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 
00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.540 01:09:12 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.540 
01:09:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.540 01:09:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.540 01:09:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.540 01:09:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.540 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 
00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.540 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.541 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.541 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.541 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.541 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.542 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sqes]="0x66"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.542 01:09:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 
01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[maxdna]="0"' 00:09:38.542 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.543 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 
rwl:0 idle_power:- active_power:-"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'ng1n1[nsze]="0x17a17a"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 
-- # ng1n1[flbas]=0x7 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.543 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 
00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:38.544 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.545 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.545 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.545 01:09:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.545 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1n1[nvmcap]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.814 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.815 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 
00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.815 01:09:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.815 01:09:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.815 01:09:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.815 01:09:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.815 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.815 01:09:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.815 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[vwci]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:38.816 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:38.816 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 
00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.817 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:38.817 
01:09:12 nvme_fdp -- nvme/functions.sh -- nvme_get nvme2 (id-ctrl, continued): anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
01:09:12 nvme_fdp -- nvme/functions.sh -- nvme_get ng2n1 (id-ns /dev/ng2n1): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:09:12 nvme_fdp -- nvme/functions.sh -- nvme_get ng2n2 (id-ns /dev/ng2n2): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 [trace continues]
-- nvme/functions.sh@21 -- # read -r reg val 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.820 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 
00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.821 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:38.821 01:09:12 nvme_fdp -- 
nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.821 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 
00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:38.822 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:38.822 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 
00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:38.822 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.823 
01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:38.823 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
[repeated per-field trace (@17 local ref, @18 shift, @20 local -gA, @21 IFS=: / read -r reg val, @22 [[ -n ... ]], @23 eval) condensed; fields parsed into nvme2n1[]:]
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
  nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:38.824 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
[per-field trace condensed; fields parsed into nvme2n2[] are identical to nvme2n1[] above]
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.826 01:09:12 nvme_fdp --
nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 
00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:38.826 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:38.826 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mssrl]=128 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 
01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 
00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.827 01:09:12 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:38.827 01:09:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:38.827 
01:09:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.827 01:09:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:38.827 01:09:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.828 01:09:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:38.828 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 
00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:38.828 01:09:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.828 01:09:12 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:38.828 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 
00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.829 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.829 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme3[sqes]="0x66"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.830 01:09:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 
01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme3[maxdna]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.830 
01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.830 01:09:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.831 01:09:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.831 01:09:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.831 01:09:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:38.831 01:09:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:38.831 01:09:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl 
in "${!ctrls[@]}" 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.092 01:09:12 nvme_fdp -- 
nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 
0x8000 ]] 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:39.092 01:09:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:39.092 01:09:12 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:39.092 01:09:12 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.922 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.922 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.922 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.922 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.181 01:09:13 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:40.181 01:09:13 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:40.181 01:09:13 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.181 01:09:13 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 ************************************ 00:09:40.181 START TEST nvme_flexible_data_placement 00:09:40.181 ************************************ 00:09:40.181 01:09:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:40.439 Initializing NVMe Controllers 00:09:40.439 Attaching to 0000:00:13.0 00:09:40.439 Controller supports FDP Attached to 0000:00:13.0 
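The `ctrl_has_fdp` trace above reduces to one check: a controller qualifies for the FDP test when bit 19 (Flexible Data Placement) of its CTRATT register is set. A minimal standalone sketch of that selection, using the CTRATT values reported in this run (0x88010 for nvme3, 0x8000 for the others); the loop and variable names here are illustrative, not the exact `nvme/functions.sh` code:

```shell
# Pick out FDP-capable controllers by testing CTRATT bit 19,
# as nvme/functions.sh's ctrl_has_fdp does in the log above.
for ctrl in nvme0 nvme1 nvme2 nvme3; do
  case $ctrl in
    nvme3) ctratt=0x88010 ;;  # value echoed for nvme3 in this run
    *)     ctratt=0x8000  ;;  # value echoed for the other controllers
  esac
  # bash arithmetic understands the 0x-prefixed hex values directly
  if (( ctratt & (1 << 19) )); then
    echo "$ctrl has FDP"      # prints only "nvme3 has FDP"
  fi
done
```

Only nvme3 passes, which is why the harness echoes `nvme3` and binds the fdp test binary to its BDF, 0000:00:13.0.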
00:09:40.439 Namespace ID: 1 Endurance Group ID: 1 00:09:40.439 Initialization complete. 00:09:40.439 00:09:40.439 ================================== 00:09:40.439 == FDP tests for Namespace: #01 == 00:09:40.439 ================================== 00:09:40.439 00:09:40.439 Get Feature: FDP: 00:09:40.439 ================= 00:09:40.439 Enabled: Yes 00:09:40.439 FDP configuration Index: 0 00:09:40.439 00:09:40.439 FDP configurations log page 00:09:40.439 =========================== 00:09:40.439 Number of FDP configurations: 1 00:09:40.439 Version: 0 00:09:40.439 Size: 112 00:09:40.439 FDP Configuration Descriptor: 0 00:09:40.439 Descriptor Size: 96 00:09:40.439 Reclaim Group Identifier format: 2 00:09:40.439 FDP Volatile Write Cache: Not Present 00:09:40.439 FDP Configuration: Valid 00:09:40.439 Vendor Specific Size: 0 00:09:40.439 Number of Reclaim Groups: 2 00:09:40.439 Number of Reclaim Unit Handles: 8 00:09:40.439 Max Placement Identifiers: 128 00:09:40.439 Number of Namespaces Supported: 256 00:09:40.439 Reclaim Unit Nominal Size: 6000000 bytes 00:09:40.439 Estimated Reclaim Unit Time Limit: Not Reported 00:09:40.439 RUH Desc #000: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #001: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #002: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #003: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #004: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #005: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #006: RUH Type: Initially Isolated 00:09:40.439 RUH Desc #007: RUH Type: Initially Isolated 00:09:40.439 00:09:40.439 FDP reclaim unit handle usage log page 00:09:40.439 ====================================== 00:09:40.439 Number of Reclaim Unit Handles: 8 00:09:40.439 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:40.439 RUH Usage Desc #001: RUH Attributes: Unused 00:09:40.439 RUH Usage Desc #002: RUH Attributes: Unused 00:09:40.439 RUH Usage Desc #003: RUH Attributes: Unused 00:09:40.439 
RUH Usage Desc #004: RUH Attributes: Unused 00:09:40.439 RUH Usage Desc #005: RUH Attributes: Unused 00:09:40.439 RUH Usage Desc #006: RUH Attributes: Unused 00:09:40.439 RUH Usage Desc #007: RUH Attributes: Unused 00:09:40.439 00:09:40.439 FDP statistics log page 00:09:40.439 ======================= 00:09:40.439 Host bytes with metadata written: 2235404288 00:09:40.439 Media bytes with metadata written: 2236612608 00:09:40.439 Media bytes erased: 0 00:09:40.439 00:09:40.439 FDP Reclaim unit handle status 00:09:40.439 ============================== 00:09:40.439 Number of RUHS descriptors: 2 00:09:40.439 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004c27 00:09:40.439 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:40.439 00:09:40.439 FDP write on placement id: 0 success 00:09:40.439 00:09:40.439 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:40.439 00:09:40.439 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:40.439 00:09:40.439 Get Feature: FDP Events for Placement handle: #0 00:09:40.439 ======================== 00:09:40.439 Number of FDP Events: 6 00:09:40.439 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:40.439 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:40.439 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:40.439 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:40.439 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:40.439 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:40.439 00:09:40.439 FDP events log page 00:09:40.439 =================== 00:09:40.439 Number of FDP events: 1 00:09:40.439 FDP Event #0: 00:09:40.439 Event Type: RU Not Written to Capacity 00:09:40.439 Placement Identifier: Valid 00:09:40.439 NSID: Valid 00:09:40.439 Location: Valid 00:09:40.439 Placement Identifier: 0 00:09:40.439 Event Timestamp: 5 00:09:40.439 Namespace Identifier: 1 
00:09:40.439 Reclaim Group Identifier: 0 00:09:40.439 Reclaim Unit Handle Identifier: 0 00:09:40.439 00:09:40.439 FDP test passed 00:09:40.439 00:09:40.439 real 0m0.221s 00:09:40.439 user 0m0.064s 00:09:40.439 sys 0m0.056s 00:09:40.439 01:09:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.439 ************************************ 00:09:40.439 END TEST nvme_flexible_data_placement 00:09:40.439 ************************************ 00:09:40.439 01:09:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:40.439 00:09:40.439 real 0m7.878s 00:09:40.439 user 0m1.117s 00:09:40.439 sys 0m1.440s 00:09:40.439 ************************************ 00:09:40.439 END TEST nvme_fdp 00:09:40.439 ************************************ 00:09:40.439 01:09:13 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.439 01:09:13 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:40.439 01:09:13 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:40.439 01:09:13 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:40.439 01:09:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.439 01:09:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.439 01:09:13 -- common/autotest_common.sh@10 -- # set +x 00:09:40.439 ************************************ 00:09:40.439 START TEST nvme_rpc 00:09:40.439 ************************************ 00:09:40.439 01:09:13 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:40.439 * Looking for test storage... 
00:09:40.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.439 01:09:14 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.439 01:09:14 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.439 01:09:14 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.700 01:09:14 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.700 --rc genhtml_branch_coverage=1 00:09:40.700 --rc genhtml_function_coverage=1 00:09:40.700 --rc genhtml_legend=1 00:09:40.700 --rc geninfo_all_blocks=1 00:09:40.700 --rc geninfo_unexecuted_blocks=1 00:09:40.700 00:09:40.700 ' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.700 --rc genhtml_branch_coverage=1 00:09:40.700 --rc genhtml_function_coverage=1 00:09:40.700 --rc genhtml_legend=1 00:09:40.700 --rc geninfo_all_blocks=1 00:09:40.700 --rc geninfo_unexecuted_blocks=1 00:09:40.700 00:09:40.700 ' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.700 --rc genhtml_branch_coverage=1 00:09:40.700 --rc genhtml_function_coverage=1 00:09:40.700 --rc genhtml_legend=1 00:09:40.700 --rc geninfo_all_blocks=1 00:09:40.700 --rc geninfo_unexecuted_blocks=1 00:09:40.700 00:09:40.700 ' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.700 --rc genhtml_branch_coverage=1 00:09:40.700 --rc genhtml_function_coverage=1 00:09:40.700 --rc genhtml_legend=1 00:09:40.700 --rc geninfo_all_blocks=1 00:09:40.700 --rc geninfo_unexecuted_blocks=1 00:09:40.700 00:09:40.700 ' 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:40.700 
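The xtrace above steps through the `lt 1.15 2` / `cmp_versions` check in `scripts/common.sh`, which decides whether the installed lcov is older than 2.0 before picking coverage options. A minimal Python sketch of that dot-separated, field-by-field comparison — the function name `lt` mirrors the shell helper, everything else is illustrative:

```python
def lt(v1: str, v2: str) -> bool:
    """Return True if version v1 < v2, comparing dot-separated numeric
    fields as the traced cmp_versions loop does: split on '.', pad the
    shorter list with zeros, compare field by field (1.15 vs 2 -> 1 < 2)."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal versions are not "less than"

print(lt("1.15", "2"))  # the exact comparison traced above
```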
01:09:14 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78969 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:40.700 01:09:14 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78969 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 78969 ']' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.700 01:09:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.700 [2024-12-14 01:09:14.225744] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:40.700 [2024-12-14 01:09:14.226370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78969 ] 00:09:40.959 [2024-12-14 01:09:14.373577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.959 [2024-12-14 01:09:14.403591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.959 [2024-12-14 01:09:14.403600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.530 01:09:15 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.530 01:09:15 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:41.530 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:41.790 Nvme0n1 00:09:41.790 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:41.790 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:42.049 request: 00:09:42.049 { 00:09:42.049 "bdev_name": "Nvme0n1", 00:09:42.049 "filename": "non_existing_file", 00:09:42.050 "method": "bdev_nvme_apply_firmware", 00:09:42.050 "req_id": 1 00:09:42.050 } 00:09:42.050 Got JSON-RPC error response 00:09:42.050 response: 00:09:42.050 { 00:09:42.050 "code": -32603, 00:09:42.050 "message": "open file failed." 
00:09:42.050 } 00:09:42.050 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:42.050 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:42.050 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:42.311 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:42.311 01:09:15 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78969 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 78969 ']' 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 78969 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78969 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.311 killing process with pid 78969 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78969' 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@973 -- # kill 78969 00:09:42.311 01:09:15 nvme_rpc -- common/autotest_common.sh@978 -- # wait 78969 00:09:42.571 00:09:42.571 real 0m2.171s 00:09:42.571 user 0m4.153s 00:09:42.571 sys 0m0.578s 00:09:42.571 01:09:16 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.571 ************************************ 00:09:42.571 END TEST nvme_rpc 00:09:42.571 ************************************ 00:09:42.571 01:09:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.571 01:09:16 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.571 01:09:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.571 
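The nvme_rpc test above deliberately calls `bdev_nvme_apply_firmware` with a non-existing file and expects the JSON-RPC error response shown (`rv=1`). A small Python sketch of recognizing such a failure — the error object is copied from the log; the helper name is illustrative:

```python
import json

# JSON-RPC error body as printed in the log above
response_text = '{"code": -32603, "message": "open file failed."}'

def rpc_failed(resp_json: str) -> bool:
    """Return True when a JSON-RPC response body carries an error object,
    i.e. the branch where the test sets rv=1; -32603 is 'internal error'."""
    resp = json.loads(resp_json)
    return "code" in resp and resp["code"] < 0

print(rpc_failed(response_text))
```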
01:09:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.571 01:09:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.571 ************************************ 00:09:42.571 START TEST nvme_rpc_timeouts 00:09:42.571 ************************************ 00:09:42.571 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.830 * Looking for test storage... 00:09:42.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.830 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.830 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.830 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.830 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:42.830 
01:09:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.830 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:42.831 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.831 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.831 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.831 01:09:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.831 --rc genhtml_branch_coverage=1 00:09:42.831 --rc genhtml_function_coverage=1 00:09:42.831 --rc genhtml_legend=1 00:09:42.831 --rc geninfo_all_blocks=1 00:09:42.831 --rc geninfo_unexecuted_blocks=1 00:09:42.831 00:09:42.831 ' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.831 
--rc genhtml_branch_coverage=1 00:09:42.831 --rc genhtml_function_coverage=1 00:09:42.831 --rc genhtml_legend=1 00:09:42.831 --rc geninfo_all_blocks=1 00:09:42.831 --rc geninfo_unexecuted_blocks=1 00:09:42.831 00:09:42.831 ' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.831 --rc genhtml_branch_coverage=1 00:09:42.831 --rc genhtml_function_coverage=1 00:09:42.831 --rc genhtml_legend=1 00:09:42.831 --rc geninfo_all_blocks=1 00:09:42.831 --rc geninfo_unexecuted_blocks=1 00:09:42.831 00:09:42.831 ' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.831 --rc genhtml_branch_coverage=1 00:09:42.831 --rc genhtml_function_coverage=1 00:09:42.831 --rc genhtml_legend=1 00:09:42.831 --rc geninfo_all_blocks=1 00:09:42.831 --rc geninfo_unexecuted_blocks=1 00:09:42.831 00:09:42.831 ' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79023 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79023 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79055 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79055 00:09:42.831 01:09:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.831 01:09:16 nvme_rpc_timeouts -- 
common/autotest_common.sh@835 -- # '[' -z 79055 ']' 00:09:42.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.831 01:09:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:42.831 [2024-12-14 01:09:16.405679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:42.831 [2024-12-14 01:09:16.405839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79055 ] 00:09:43.092 [2024-12-14 01:09:16.552420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.092 [2024-12-14 01:09:16.582497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.092 [2024-12-14 01:09:16.582552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.661 01:09:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.661 01:09:17 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.661 Checking default timeout settings: 00:09:43.661 01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.661 01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.231 Making settings changes with rpc: 00:09:44.231 
01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.231 01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.231 Check default vs. modified settings: 00:09:44.231 01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:44.231 01:09:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.801 Setting action_on_timeout is changed as expected. 
00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.801 Setting timeout_us is changed as expected. 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.801 Setting timeout_admin_us is changed as expected. 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
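The checks above diff each timeout setting between the saved default and modified configs (`none`→`abort`, `0`→`12000000`, `0`→`24000000`, matching the earlier `bdev_nvme_set_options` call). A Python sketch of that per-setting comparison — the values come straight from the trace, the function is illustrative:

```python
# Before/after values taken from the settings checks traced above
default_settings = {"action_on_timeout": "none", "timeout_us": "0",
                    "timeout_admin_us": "0"}
modified_settings = {"action_on_timeout": "abort", "timeout_us": "12000000",
                     "timeout_admin_us": "24000000"}

def check_changed(before: dict, after: dict, setting: str) -> str:
    """Mirror the test's per-setting check: fail if the value is unchanged."""
    if before[setting] == after[setting]:
        raise AssertionError(f"Setting {setting} unexpectedly unchanged")
    return f"Setting {setting} is changed as expected."

for s in ("action_on_timeout", "timeout_us", "timeout_admin_us"):
    print(check_changed(default_settings, modified_settings, s))
```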
00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79023 /tmp/settings_modified_79023 00:09:44.801 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79055 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79055 ']' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79055 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79055 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.801 killing process with pid 79055 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79055' 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79055 00:09:44.801 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79055 00:09:45.062 RPC TIMEOUT SETTING TEST PASSED. 00:09:45.062 01:09:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:45.062 00:09:45.062 real 0m2.435s 00:09:45.062 user 0m4.786s 00:09:45.062 sys 0m0.568s 00:09:45.062 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.062 01:09:18 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:45.062 ************************************ 00:09:45.062 END TEST nvme_rpc_timeouts 00:09:45.062 ************************************ 00:09:45.062 01:09:18 -- spdk/autotest.sh@239 -- # uname -s 00:09:45.062 01:09:18 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:45.062 01:09:18 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:45.062 01:09:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.062 01:09:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.062 01:09:18 -- common/autotest_common.sh@10 -- # set +x 00:09:45.322 ************************************ 00:09:45.322 START TEST sw_hotplug 00:09:45.322 ************************************ 00:09:45.322 01:09:18 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:45.323 * Looking for test storage... 
00:09:45.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.323 01:09:18 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 01:09:18 sw_hotplug -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 01:09:18 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.323 --rc genhtml_branch_coverage=1 00:09:45.323 --rc genhtml_function_coverage=1 00:09:45.323 --rc genhtml_legend=1 00:09:45.323 --rc geninfo_all_blocks=1 00:09:45.323 --rc geninfo_unexecuted_blocks=1 00:09:45.323 00:09:45.323 ' 00:09:45.323 01:09:18 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.844 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.844 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.844 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.844 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:45.844 01:09:19 
sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 
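The trace above builds the PCI class code for NVMe controllers (class 01 mass storage, subclass 08 NVM, progif 02 NVMe) and filters `lspci -mm -n -D` output for it. A Python sketch of the same computation and filter — the sample lspci line is a hypothetical stand-in, since the log does not print the raw lspci output:

```python
# Class/subclass/progif values from the trace: printf %02x 1 / 8 / 2
cc = f"{1:02x}{8:02x}"   # "0108", the class code the awk filter matches
progif = f"{2:02x}"      # "02", matched via grep -- -p02

# Hypothetical `lspci -mm -n -D` line for one of the controllers above
sample = '0000:00:10.0 "0108" "1b36" "0010" -p02 "" ""'

def nvme_bdf(line: str, class_code: str = cc):
    """Mimic the awk filter: return field 1 (the BDF) when field 2
    contains the class code, else None."""
    fields = line.split()
    if class_code in fields[1].strip('"'):
        return fields[0]
    return None

print(nvme_bdf(sample))
```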
00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 
00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:45.844 01:09:19 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:45.844 01:09:19 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:46.104 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.365 Waiting for block devices as requested 00:09:46.365 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.365 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.626 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.626 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.916 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:51.916 01:09:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:51.916 01:09:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.177 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.177 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:52.438 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:52.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79906 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:52.961 01:09:26 sw_hotplug 
-- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:52.961 01:09:26 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:52.961 01:09:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:53.222 Initializing NVMe Controllers 00:09:53.222 Attaching to 0000:00:10.0 00:09:53.222 Attaching to 0000:00:11.0 00:09:53.222 Attached to 0000:00:11.0 00:09:53.222 Attached to 0000:00:10.0 00:09:53.222 Initialization complete. Starting I/O... 
00:09:53.222 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:53.222 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:53.222 00:09:54.165 QEMU NVMe Ctrl (12341 ): 2465 I/Os completed (+2465) 00:09:54.165 QEMU NVMe Ctrl (12340 ): 2465 I/Os completed (+2465) 00:09:54.165 00:09:55.109 QEMU NVMe Ctrl (12341 ): 5553 I/Os completed (+3088) 00:09:55.109 QEMU NVMe Ctrl (12340 ): 5553 I/Os completed (+3088) 00:09:55.109 00:09:56.052 QEMU NVMe Ctrl (12341 ): 8621 I/Os completed (+3068) 00:09:56.052 QEMU NVMe Ctrl (12340 ): 8621 I/Os completed (+3068) 00:09:56.052 00:09:56.996 QEMU NVMe Ctrl (12341 ): 11782 I/Os completed (+3161) 00:09:56.996 QEMU NVMe Ctrl (12340 ): 11783 I/Os completed (+3162) 00:09:56.996 00:09:58.381 QEMU NVMe Ctrl (12341 ): 15442 I/Os completed (+3660) 00:09:58.381 QEMU NVMe Ctrl (12340 ): 15439 I/Os completed (+3656) 00:09:58.381 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.986 [2024-12-14 01:09:32.412419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:09:58.986 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:58.986 [2024-12-14 01:09:32.413362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.413404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.413415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.413428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:58.986 [2024-12-14 01:09:32.414300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.414344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.414356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.414368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.986 [2024-12-14 01:09:32.435162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:58.986 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:58.986 [2024-12-14 01:09:32.436183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.436248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.436266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.436280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:58.986 [2024-12-14 01:09:32.437305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.437336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.437352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 [2024-12-14 01:09:32.437363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:58.986 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:58.986 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:09:59.248 Attaching to 0000:00:10.0 00:09:59.248 Attached to 0000:00:10.0 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.248 01:09:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.248 Attaching to 0000:00:11.0 00:09:59.248 Attached to 0000:00:11.0 00:10:00.191 QEMU NVMe Ctrl (12340 ): 3247 I/Os completed (+3247) 00:10:00.191 QEMU NVMe Ctrl (12341 ): 2929 I/Os completed (+2929) 00:10:00.191 00:10:01.133 QEMU NVMe Ctrl (12340 ): 6259 I/Os completed (+3012) 00:10:01.133 QEMU NVMe Ctrl (12341 ): 5941 I/Os completed (+3012) 00:10:01.133 00:10:02.075 QEMU NVMe Ctrl (12340 ): 9407 I/Os completed (+3148) 00:10:02.075 QEMU NVMe Ctrl (12341 ): 9089 I/Os completed (+3148) 00:10:02.075 00:10:03.018 QEMU NVMe Ctrl (12340 ): 12739 I/Os completed (+3332) 00:10:03.018 QEMU NVMe Ctrl (12341 ): 12420 I/Os completed (+3331) 00:10:03.018 00:10:04.401 QEMU NVMe Ctrl (12340 ): 17126 I/Os completed (+4387) 00:10:04.401 QEMU NVMe Ctrl (12341 ): 16784 I/Os completed (+4364) 00:10:04.401 00:10:05.341 QEMU NVMe Ctrl (12340 ): 21457 I/Os completed (+4331) 00:10:05.341 QEMU NVMe Ctrl (12341 ): 21129 I/Os completed (+4345) 00:10:05.341 00:10:06.279 QEMU NVMe Ctrl (12340 ): 25143 I/Os completed (+3686) 00:10:06.279 QEMU NVMe Ctrl (12341 ): 24799 I/Os completed (+3670) 00:10:06.279 00:10:07.219 QEMU NVMe Ctrl (12340 ): 29139 I/Os completed (+3996) 00:10:07.219 QEMU NVMe Ctrl (12341 ): 28852 I/Os completed (+4053) 00:10:07.219 00:10:08.160 QEMU NVMe Ctrl (12340 ): 32107 I/Os completed (+2968) 00:10:08.160 QEMU NVMe Ctrl (12341 ): 31831 I/Os completed (+2979) 00:10:08.160 00:10:09.101 QEMU NVMe Ctrl (12340 ): 35215 I/Os completed (+3108) 00:10:09.101 QEMU NVMe Ctrl (12341 ): 34947 I/Os completed (+3116) 00:10:09.101 00:10:10.040 QEMU NVMe Ctrl (12340 ): 39465 I/Os completed (+4250) 00:10:10.040 QEMU NVMe Ctrl (12341 ): 39204 I/Os completed (+4257) 
00:10:10.040 00:10:11.420 QEMU NVMe Ctrl (12340 ): 43730 I/Os completed (+4265) 00:10:11.420 QEMU NVMe Ctrl (12341 ): 43465 I/Os completed (+4261) 00:10:11.420 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.420 [2024-12-14 01:09:44.726141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.420 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.420 [2024-12-14 01:09:44.726909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.726942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.726955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.726971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.420 [2024-12-14 01:09:44.728098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.728189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.728215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 [2024-12-14 01:09:44.728264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.420 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.420 [2024-12-14 01:09:44.748502] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:11.420 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.420 [2024-12-14 01:09:44.749263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.749377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.749407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.749464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.421 [2024-12-14 01:09:44.750342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.750383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.750409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 [2024-12-14 01:09:44.750482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.421 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.421 EAL: Scan for (pci) bus failed. 
00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.421 Attaching to 0000:00:10.0 00:10:11.421 Attached to 0000:00:10.0 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.421 01:09:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:11.421 Attaching to 0000:00:11.0 00:10:11.421 Attached to 0000:00:11.0 00:10:11.989 QEMU NVMe Ctrl (12340 ): 3009 I/Os completed (+3009) 00:10:11.989 QEMU NVMe Ctrl (12341 ): 2583 I/Os completed (+2583) 00:10:11.989 00:10:13.369 QEMU NVMe Ctrl (12340 ): 7264 I/Os completed (+4255) 00:10:13.369 QEMU NVMe Ctrl (12341 ): 6829 I/Os completed (+4246) 00:10:13.369 00:10:14.311 QEMU NVMe Ctrl (12340 ): 11540 I/Os completed (+4276) 00:10:14.311 QEMU NVMe Ctrl (12341 ): 11093 I/Os completed (+4264) 00:10:14.311 00:10:15.254 QEMU NVMe Ctrl (12340 ): 15247 I/Os completed (+3707) 00:10:15.254 QEMU NVMe Ctrl (12341 ): 14813 I/Os completed (+3720) 00:10:15.254 00:10:16.196 QEMU NVMe Ctrl (12340 ): 18583 I/Os completed (+3336) 00:10:16.196 QEMU NVMe Ctrl (12341 ): 18149 I/Os completed (+3336) 00:10:16.196 00:10:17.139 QEMU NVMe Ctrl (12340 ): 21793 I/Os completed (+3210) 00:10:17.139 QEMU NVMe Ctrl (12341 ): 21419 I/Os completed (+3270) 00:10:17.139 00:10:18.081 QEMU NVMe Ctrl (12340 ): 24933 I/Os completed (+3140) 00:10:18.081 QEMU 
NVMe Ctrl (12341 ): 24563 I/Os completed (+3144) 00:10:18.081 00:10:19.024 QEMU NVMe Ctrl (12340 ): 27929 I/Os completed (+2996) 00:10:19.024 QEMU NVMe Ctrl (12341 ): 27559 I/Os completed (+2996) 00:10:19.024 00:10:20.410 QEMU NVMe Ctrl (12340 ): 30957 I/Os completed (+3028) 00:10:20.410 QEMU NVMe Ctrl (12341 ): 30596 I/Os completed (+3037) 00:10:20.410 00:10:20.983 QEMU NVMe Ctrl (12340 ): 34025 I/Os completed (+3068) 00:10:20.983 QEMU NVMe Ctrl (12341 ): 33669 I/Os completed (+3073) 00:10:20.983 00:10:22.370 QEMU NVMe Ctrl (12340 ): 37118 I/Os completed (+3093) 00:10:22.370 QEMU NVMe Ctrl (12341 ): 36761 I/Os completed (+3092) 00:10:22.370 00:10:23.362 QEMU NVMe Ctrl (12340 ): 40740 I/Os completed (+3622) 00:10:23.362 QEMU NVMe Ctrl (12341 ): 40392 I/Os completed (+3631) 00:10:23.362 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.623 [2024-12-14 01:09:56.984600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:23.623 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:23.623 [2024-12-14 01:09:56.985513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.985612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.985657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.985718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.623 [2024-12-14 01:09:56.986822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.986933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.986960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.987010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.623 01:09:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.623 [2024-12-14 01:09:56.999137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:23.623 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:23.623 [2024-12-14 01:09:56.999871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.999900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.999912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:56.999923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.623 [2024-12-14 01:09:57.000742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:57.000769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.623 [2024-12-14 01:09:57.000781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.624 [2024-12-14 01:09:57.000792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.624 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:23.624 EAL: Scan for (pci) bus failed. 
00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.624 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:23.624 Attaching to 0000:00:10.0 00:10:23.624 Attached to 0000:00:10.0 00:10:23.884 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:23.884 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.884 01:09:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:23.884 Attaching to 0000:00:11.0 00:10:23.884 Attached to 0000:00:11.0 00:10:23.884 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.884 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.884 [2024-12-14 01:09:57.256520] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.120 01:10:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.120 01:10:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.120 01:10:09 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.85 00:10:36.120 01:10:09 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.85 00:10:36.120 01:10:09 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.120 01:10:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:36.120 01:10:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:36.120 remove_attach_helper took 42.85s to complete 
(handling 2 nvme drive(s)) 01:10:09 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79906 00:10:42.708 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79906) - No such process 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79906 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80448 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:42.708 01:10:15 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80448 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80448 ']' 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.708 01:10:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.708 [2024-12-14 01:10:15.350347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:42.708 [2024-12-14 01:10:15.350842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80448 ] 00:10:42.708 [2024-12-14 01:10:15.498575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.708 [2024-12-14 01:10:15.528577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:42.708 01:10:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local 
use_bdev=true 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:42.708 01:10:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.298 01:10:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.298 01:10:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.298 01:10:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.298 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.298 [2024-12-14 01:10:22.315219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:49.298 [2024-12-14 01:10:22.316261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.298 [2024-12-14 01:10:22.316295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.298 [2024-12-14 01:10:22.316307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.298 [2024-12-14 01:10:22.316319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.298 [2024-12-14 01:10:22.316328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.298 [2024-12-14 01:10:22.316335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.298 [2024-12-14 01:10:22.316345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.316351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.316359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 [2024-12-14 01:10:22.316366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.316373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.316380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 01:10:22 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.299 01:10:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.299 01:10:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.299 [2024-12-14 01:10:22.815478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:49.299 [2024-12-14 01:10:22.816481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.816513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.816522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 [2024-12-14 01:10:22.816534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.816541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.816549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 [2024-12-14 01:10:22.816556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.816564] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.816570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 [2024-12-14 01:10:22.816580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.299 [2024-12-14 01:10:22.816586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.299 [2024-12-14 01:10:22.816594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.299 01:10:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:49.299 01:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.868 01:10:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.868 01:10:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.868 01:10:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.868 01:10:23 sw_hotplug -- 
nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.868 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.128 01:10:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 
00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.362 01:10:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:02.362 01:10:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.362 [2024-12-14 01:10:35.715670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:02.362 [2024-12-14 01:10:35.716691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.362 [2024-12-14 01:10:35.716722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.362 [2024-12-14 01:10:35.716734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.362 [2024-12-14 01:10:35.716747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.362 [2024-12-14 01:10:35.716755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.362 [2024-12-14 01:10:35.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.362 [2024-12-14 01:10:35.716770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.362 [2024-12-14 01:10:35.716776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.362 [2024-12-14 01:10:35.716784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.362 [2024-12-14 01:10:35.716790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.362 [2024-12-14 01:10:35.716799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.362 [2024-12-14 01:10:35.716805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.623 [2024-12-14 01:10:36.115675] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:02.623 [2024-12-14 01:10:36.116690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.623 [2024-12-14 01:10:36.116720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.623 [2024-12-14 01:10:36.116729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.623 [2024-12-14 01:10:36.116741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.623 [2024-12-14 01:10:36.116748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.623 [2024-12-14 01:10:36.116756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.623 [2024-12-14 01:10:36.116763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.623 [2024-12-14 01:10:36.116770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.623 [2024-12-14 01:10:36.116777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.623 [2024-12-14 01:10:36.116785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.623 [2024-12-14 01:10:36.116791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.623 [2024-12-14 01:10:36.116800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.623 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.623 01:10:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.623 01:10:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.623 01:10:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.884 01:10:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # 
bdfs=($(bdev_bdfs)) 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.119 [2024-12-14 01:10:48.515889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:15.119 [2024-12-14 01:10:48.517198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.119 [2024-12-14 01:10:48.517252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.119 [2024-12-14 01:10:48.517288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.119 [2024-12-14 01:10:48.517317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.119 [2024-12-14 01:10:48.517335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.119 [2024-12-14 01:10:48.517358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.119 [2024-12-14 01:10:48.517382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.119 [2024-12-14 01:10:48.517398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.119 [2024-12-14 01:10:48.517467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.119 [2024-12-14 01:10:48.517492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.119 [2024-12-14 01:10:48.517509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.119 [2024-12-14 01:10:48.517543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.119 01:10:48 sw_hotplug -- 
nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.119 01:10:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:15.119 01:10:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.411 [2024-12-14 01:10:48.915886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:15.411 [2024-12-14 01:10:48.916887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.411 [2024-12-14 01:10:48.916918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.411 [2024-12-14 01:10:48.916928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.411 [2024-12-14 01:10:48.916940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.411 [2024-12-14 01:10:48.916948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.411 [2024-12-14 01:10:48.916958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.411 [2024-12-14 01:10:48.916964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.411 [2024-12-14 01:10:48.916972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.411 [2024-12-14 01:10:48.916979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.411 [2024-12-14 01:10:48.916988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.411 [2024-12-14 01:10:48.916994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.411 [2024-12-14 01:10:48.917002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.672 01:10:49 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.672 01:10:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.672 01:10:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.672 01:10:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.672 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:15.933 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.933 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.933 01:10:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:11:28.169 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:28.169 01:11:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:28.169 01:11:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.755 01:11:07 
sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.755 01:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.755 01:11:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:34.755 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.755 [2024-12-14 01:11:07.508178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:34.755 [2024-12-14 01:11:07.508958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.508988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.509000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.509012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.509020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.509027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.509035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.509042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.509051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.509057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.509065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.509071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.908182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:34.755 [2024-12-14 01:11:07.908903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.908932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.908942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.908953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.908960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.755 [2024-12-14 01:11:07.908968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.755 [2024-12-14 01:11:07.908975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.755 [2024-12-14 01:11:07.908982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.756 [2024-12-14 01:11:07.908989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.756 [2024-12-14 01:11:07.908996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.756 [2024-12-14 01:11:07.909002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.756 [2024-12-14 01:11:07.909012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.756 01:11:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.756 01:11:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.756 01:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.756 01:11:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.756 01:11:08 sw_hotplug 
-- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.756 01:11:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.988 01:11:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.988 01:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.988 01:11:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.988 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.988 [2024-12-14 01:11:20.308379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:46.988 [2024-12-14 01:11:20.309249] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.988 [2024-12-14 01:11:20.309279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.988 [2024-12-14 01:11:20.309291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.988 [2024-12-14 01:11:20.309303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.988 [2024-12-14 01:11:20.309312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.988 [2024-12-14 01:11:20.309319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.988 [2024-12-14 01:11:20.309326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.988 [2024-12-14 01:11:20.309333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.988 [2024-12-14 01:11:20.309341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.988 [2024-12-14 01:11:20.309348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.989 [2024-12-14 01:11:20.309355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.989 [2024-12-14 01:11:20.309362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.989 01:11:20 sw_hotplug -- 
nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.989 01:11:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.989 01:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.989 01:11:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:46.989 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.250 [2024-12-14 01:11:20.708381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:47.250 [2024-12-14 01:11:20.709120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.250 [2024-12-14 01:11:20.709151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.250 [2024-12-14 01:11:20.709160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.250 [2024-12-14 01:11:20.709172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.250 [2024-12-14 01:11:20.709180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.250 [2024-12-14 01:11:20.709188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.250 [2024-12-14 01:11:20.709195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.250 [2024-12-14 01:11:20.709202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.250 [2024-12-14 01:11:20.709209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.250 [2024-12-14 01:11:20.709217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.250 [2024-12-14 01:11:20.709223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.250 [2024-12-14 01:11:20.709232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.511 01:11:20 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.511 01:11:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.511 01:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 01:11:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.511 01:11:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:47.511 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.772 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.772 01:11:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.004 01:11:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.004 01:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.004 01:11:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.004 [2024-12-14 01:11:33.208573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:00.004 01:11:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.004 01:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.004 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.004 [2024-12-14 01:11:33.209351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.004 [2024-12-14 01:11:33.209374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.004 [2024-12-14 01:11:33.209386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.004 [2024-12-14 01:11:33.209398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.004 [2024-12-14 01:11:33.209409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.004 [2024-12-14 01:11:33.209416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 [2024-12-14 01:11:33.209424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.209430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.209438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 [2024-12-14 01:11:33.209444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.209452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.209459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 01:11:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.005 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.005 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.005 [2024-12-14 01:11:33.608578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:00.005 [2024-12-14 01:11:33.609297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.609329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.609338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 [2024-12-14 01:11:33.609350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.609357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.609365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 [2024-12-14 01:11:33.609371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.609382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.609388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.005 [2024-12-14 01:11:33.609396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.005 [2024-12-14 01:11:33.609402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.005 [2024-12-14 01:11:33.609410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.267 01:11:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.267 01:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.267 01:11:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.267 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.528 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.528 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.528 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.528 01:11:33 sw_hotplug -- 
nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.528 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:00.528 01:11:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.528 01:11:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.528 01:11:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:12:12.764 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:12:12.764 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:12.764 01:11:46 
sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80448 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80448 ']' 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80448 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80448 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.764 01:11:46 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.765 killing process with pid 80448 00:12:12.765 01:11:46 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80448' 00:12:12.765 01:11:46 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80448 00:12:12.765 01:11:46 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80448 00:12:12.765 01:11:46 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:13.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:13.598 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:13.598 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:13.598 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:13.598 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:13.598 00:12:13.598 real 2m28.496s 00:12:13.598 user 1m48.736s 00:12:13.598 sys 0m18.209s 00:12:13.598 ************************************ 00:12:13.598 END TEST sw_hotplug 00:12:13.598 01:11:47 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.598 01:11:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.598 ************************************ 00:12:13.865 01:11:47 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 
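For readers following the xtrace above: the `bdev_bdfs` helper that the hotplug loop calls repeatedly (`sw_hotplug.sh@12`-`@13`) is just `rpc_cmd bdev_get_bdevs` piped through `jq` and `sort -u` to list the PCI addresses backing the NVMe bdevs. A minimal stand-alone sketch, feeding canned JSON instead of a live RPC call (the sample addresses mirror the `0000:00:10.0`/`0000:00:11.0` devices in this run, but the JSON itself is hypothetical):

```shell
# Sketch of the bdev_bdfs helper traced in the log: extract each bdev's
# NVMe PCI address from bdev_get_bdevs-shaped JSON, deduplicated and sorted.
bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}

# Canned JSON shaped like rpc_cmd bdev_get_bdevs output (hypothetical data),
# so the sketch runs without a live SPDK target.
json='[{"driver_specific":{"nvme":[{"pci_address":"0000:00:11.0"}]}},
       {"driver_specific":{"nvme":[{"pci_address":"0000:00:10.0"}]}}]'

printf '%s' "$json" | bdev_bdfs
# prints:
# 0000:00:10.0
# 0000:00:11.0
```

In the test itself the result is captured into an array (`bdfs=($(bdev_bdfs))`) and its length drives the "Still waiting for %s to be gone" polling loop seen above.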
00:12:13.865 01:11:47 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:13.865 01:11:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:13.865 01:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.865 01:11:47 -- common/autotest_common.sh@10 -- # set +x 00:12:13.865 ************************************ 00:12:13.865 START TEST nvme_xnvme 00:12:13.865 ************************************ 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:13.865 * Looking for test storage... 00:12:13.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" 
in 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.865 01:11:47 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:13.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.865 --rc genhtml_branch_coverage=1 00:12:13.865 --rc genhtml_function_coverage=1 00:12:13.865 --rc genhtml_legend=1 00:12:13.865 --rc geninfo_all_blocks=1 00:12:13.865 --rc geninfo_unexecuted_blocks=1 00:12:13.865 00:12:13.865 ' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:13.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.865 --rc genhtml_branch_coverage=1 00:12:13.865 
--rc genhtml_function_coverage=1 00:12:13.865 --rc genhtml_legend=1 00:12:13.865 --rc geninfo_all_blocks=1 00:12:13.865 --rc geninfo_unexecuted_blocks=1 00:12:13.865 00:12:13.865 ' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:13.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.865 --rc genhtml_branch_coverage=1 00:12:13.865 --rc genhtml_function_coverage=1 00:12:13.865 --rc genhtml_legend=1 00:12:13.865 --rc geninfo_all_blocks=1 00:12:13.865 --rc geninfo_unexecuted_blocks=1 00:12:13.865 00:12:13.865 ' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:13.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.865 --rc genhtml_branch_coverage=1 00:12:13.865 --rc genhtml_function_coverage=1 00:12:13.865 --rc genhtml_legend=1 00:12:13.865 --rc geninfo_all_blocks=1 00:12:13.865 --rc geninfo_unexecuted_blocks=1 00:12:13.865 00:12:13.865 ' 00:12:13.865 01:11:47 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:13.865 01:11:47 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:13.865 01:11:47 nvme_xnvme -- common/autotest_common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:13.865 
01:11:47 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:13.865 01:11:47 nvme_xnvme -- 
common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:13.865 01:11:47 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:13.866 
01:11:47 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:13.866 01:11:47 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 
00:12:13.866 01:11:47 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:13.866 #define SPDK_CONFIG_H 00:12:13.866 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:13.866 #define SPDK_CONFIG_APPS 1 00:12:13.866 #define SPDK_CONFIG_ARCH native 00:12:13.866 #define SPDK_CONFIG_ASAN 
1 00:12:13.866 #undef SPDK_CONFIG_AVAHI 00:12:13.866 #undef SPDK_CONFIG_CET 00:12:13.866 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:13.866 #define SPDK_CONFIG_COVERAGE 1 00:12:13.866 #define SPDK_CONFIG_CROSS_PREFIX 00:12:13.866 #undef SPDK_CONFIG_CRYPTO 00:12:13.866 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:13.866 #undef SPDK_CONFIG_CUSTOMOCF 00:12:13.866 #undef SPDK_CONFIG_DAOS 00:12:13.866 #define SPDK_CONFIG_DAOS_DIR 00:12:13.866 #define SPDK_CONFIG_DEBUG 1 00:12:13.866 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:13.866 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:13.866 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:13.866 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:13.866 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:13.866 #undef SPDK_CONFIG_DPDK_UADK 00:12:13.866 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:13.866 #define SPDK_CONFIG_EXAMPLES 1 00:12:13.866 #undef SPDK_CONFIG_FC 00:12:13.866 #define SPDK_CONFIG_FC_PATH 00:12:13.866 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:13.866 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:13.866 #define SPDK_CONFIG_FSDEV 1 00:12:13.866 #undef SPDK_CONFIG_FUSE 00:12:13.866 #undef SPDK_CONFIG_FUZZER 00:12:13.866 #define SPDK_CONFIG_FUZZER_LIB 00:12:13.866 #undef SPDK_CONFIG_GOLANG 00:12:13.866 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:13.866 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:13.866 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:13.866 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:13.866 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:13.866 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:13.866 #undef SPDK_CONFIG_HAVE_LZ4 00:12:13.866 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:13.866 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:13.866 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:13.866 #define SPDK_CONFIG_IDXD 1 00:12:13.866 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:13.866 #undef SPDK_CONFIG_IPSEC_MB 
00:12:13.866 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:13.866 #define SPDK_CONFIG_ISAL 1 00:12:13.866 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:13.866 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:13.866 #define SPDK_CONFIG_LIBDIR 00:12:13.866 #undef SPDK_CONFIG_LTO 00:12:13.866 #define SPDK_CONFIG_MAX_LCORES 128 00:12:13.866 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:13.866 #define SPDK_CONFIG_NVME_CUSE 1 00:12:13.866 #undef SPDK_CONFIG_OCF 00:12:13.866 #define SPDK_CONFIG_OCF_PATH 00:12:13.866 #define SPDK_CONFIG_OPENSSL_PATH 00:12:13.866 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:13.866 #define SPDK_CONFIG_PGO_DIR 00:12:13.866 #undef SPDK_CONFIG_PGO_USE 00:12:13.866 #define SPDK_CONFIG_PREFIX /usr/local 00:12:13.866 #undef SPDK_CONFIG_RAID5F 00:12:13.866 #undef SPDK_CONFIG_RBD 00:12:13.866 #define SPDK_CONFIG_RDMA 1 00:12:13.866 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:13.866 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:13.866 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:13.866 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:13.866 #define SPDK_CONFIG_SHARED 1 00:12:13.866 #undef SPDK_CONFIG_SMA 00:12:13.866 #define SPDK_CONFIG_TESTS 1 00:12:13.866 #undef SPDK_CONFIG_TSAN 00:12:13.866 #define SPDK_CONFIG_UBLK 1 00:12:13.866 #define SPDK_CONFIG_UBSAN 1 00:12:13.866 #undef SPDK_CONFIG_UNIT_TESTS 00:12:13.866 #undef SPDK_CONFIG_URING 00:12:13.866 #define SPDK_CONFIG_URING_PATH 00:12:13.866 #undef SPDK_CONFIG_URING_ZNS 00:12:13.866 #undef SPDK_CONFIG_USDT 00:12:13.866 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:13.866 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:13.866 #undef SPDK_CONFIG_VFIO_USER 00:12:13.866 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:13.866 #define SPDK_CONFIG_VHOST 1 00:12:13.866 #define SPDK_CONFIG_VIRTIO 1 00:12:13.866 #undef SPDK_CONFIG_VTUNE 00:12:13.866 #define SPDK_CONFIG_VTUNE_DIR 00:12:13.866 #define SPDK_CONFIG_WERROR 1 00:12:13.866 #define SPDK_CONFIG_WPDK_DIR 00:12:13.866 #define SPDK_CONFIG_XNVME 1 00:12:13.866 #endif /* SPDK_CONFIG_H */ == 
*\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:13.866 01:11:47 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:13.866 01:11:47 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.866 01:11:47 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.866 01:11:47 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.866 01:11:47 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.866 01:11:47 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.866 01:11:47 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.866 01:11:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.867 01:11:47 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.867 01:11:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:13.867 01:11:47 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:13.867 01:11:47 
nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:13.867 01:11:47 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 
00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:13.867 
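The alternating `: 0` / `export SPDK_TEST_*` pairs in the trace above are consistent with the shell default-then-export idiom. A minimal sketch of that idiom follows — `SPDK_TEST_EXAMPLE` is an invented variable name for illustration, not the actual autotest_common.sh source:

```shell
# Ensure a clean slate for the demo (hypothetical variable).
unset SPDK_TEST_EXAMPLE

# Assign a default only when the variable is unset, then export it.
: "${SPDK_TEST_EXAMPLE:=0}"
export SPDK_TEST_EXAMPLE

# Simulate a value already set by the CI environment: the default
# expansion is a no-op and the existing value survives.
SPDK_TEST_NVME=1
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME

echo "$SPDK_TEST_EXAMPLE $SPDK_TEST_NVME"   # prints: 0 1
```

Because `:` is a no-op builtin, the line exists purely for the side effect of the `${VAR:=default}` expansion, which is why each pair shows up in the `set -x` trace as a bare `: 0` followed by the `export`.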
01:11:47 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:13.867 01:11:47 
nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:13.867 01:11:47 nvme_xnvme -- 
common/autotest_common.sh@152 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:13.867 01:11:47 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@176 -- # 
export SPDK_TEST_SETUP 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@199 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@256 -- # 
SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:13.868 01:11:47 nvme_xnvme -- 
common/autotest_common.sh@279 -- # export valgrind= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81788 ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81788 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:13.868 01:11:47 nvme_xnvme -- 
common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ASuNw1 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.ASuNw1/tests/xnvme /tmp/spdk.ASuNw1 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:13.868 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13364097024 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6218764288 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=4194304 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:14.149 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13364097024 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6218764288 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 
00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:14.150 01:11:47 nvme_xnvme -- 
common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98497073152 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1205706752 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:14.150 * Looking for test storage... 
00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13364097024 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:14.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; 
print_backtrace >&2' ERR 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 
00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.150 --rc genhtml_branch_coverage=1 00:12:14.150 --rc genhtml_function_coverage=1 00:12:14.150 --rc genhtml_legend=1 00:12:14.150 --rc 
geninfo_all_blocks=1 00:12:14.150 --rc geninfo_unexecuted_blocks=1 00:12:14.150 00:12:14.150 ' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.150 --rc genhtml_branch_coverage=1 00:12:14.150 --rc genhtml_function_coverage=1 00:12:14.150 --rc genhtml_legend=1 00:12:14.150 --rc geninfo_all_blocks=1 00:12:14.150 --rc geninfo_unexecuted_blocks=1 00:12:14.150 00:12:14.150 ' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.150 --rc genhtml_branch_coverage=1 00:12:14.150 --rc genhtml_function_coverage=1 00:12:14.150 --rc genhtml_legend=1 00:12:14.150 --rc geninfo_all_blocks=1 00:12:14.150 --rc geninfo_unexecuted_blocks=1 00:12:14.150 00:12:14.150 ' 00:12:14.150 01:11:47 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.150 --rc genhtml_branch_coverage=1 00:12:14.150 --rc genhtml_function_coverage=1 00:12:14.150 --rc genhtml_legend=1 00:12:14.150 --rc geninfo_all_blocks=1 00:12:14.150 --rc geninfo_unexecuted_blocks=1 00:12:14.150 00:12:14.150 ' 00:12:14.150 01:11:47 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.150 01:11:47 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.150 01:11:47 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.150 01:11:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.150 01:11:47 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.150 01:11:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:14.150 01:11:47 nvme_xnvme -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A 
xnvme_filename 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:14.150 01:11:47 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.700 Waiting for block devices as requested 00:12:14.700 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.700 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.700 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.961 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.252 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:20.252 01:11:53 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:20.252 01:11:53 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:20.252 01:11:53 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:20.513 01:11:53 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:20.513 01:11:53 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:20.513 01:11:53 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:20.513 01:11:53 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:20.513 01:11:53 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:20.513 No valid GPT data, bailing 00:12:20.513 01:11:53 nvme_xnvme -- scripts/common.sh@394 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:12:20.513 01:11:54 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:12:20.513 01:11:54 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:20.513 01:11:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:20.513 01:11:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.513 01:11:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.513 01:11:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.513 ************************************ 00:12:20.513 START TEST xnvme_rpc 00:12:20.513 ************************************ 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- 
xnvme/xnvme.sh@48 -- # cc=() 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:20.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82180 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82180 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82180 ']' 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.513 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.514 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.514 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.514 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.514 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:20.514 [2024-12-14 01:11:54.115511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:12:20.514 [2024-12-14 01:11:54.115723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82180 ] 00:12:20.775 [2024-12-14 01:11:54.263505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.775 [2024-12-14 01:11:54.292344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 xnvme_bdev 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:21.719 01:11:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 
00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82180 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82180 ']' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82180 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82180 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.719 killing process with pid 82180 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82180' 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82180 00:12:21.719 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82180 00:12:21.982 ************************************ 00:12:21.982 END TEST xnvme_rpc 00:12:21.982 ************************************ 00:12:21.982 00:12:21.982 real 0m1.428s 00:12:21.982 user 0m1.489s 00:12:21.982 sys 0m0.411s 00:12:21.982 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.982 01:11:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.982 01:11:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test 
xnvme_bdevperf xnvme_bdevperf 00:12:21.982 01:11:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.982 01:11:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.982 01:11:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.982 ************************************ 00:12:21.982 START TEST xnvme_bdevperf 00:12:21.982 ************************************ 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:21.982 01:11:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:21.982 { 00:12:21.982 "subsystems": [ 00:12:21.982 { 00:12:21.982 "subsystem": "bdev", 00:12:21.982 "config": [ 00:12:21.982 { 00:12:21.982 "params": { 00:12:21.982 "io_mechanism": "libaio", 00:12:21.982 "conserve_cpu": false, 00:12:21.982 "filename": "/dev/nvme0n1", 00:12:21.982 "name": "xnvme_bdev" 00:12:21.982 }, 00:12:21.982 "method": "bdev_xnvme_create" 00:12:21.982 }, 00:12:21.982 { 00:12:21.982 "method": "bdev_wait_for_examine" 00:12:21.982 } 00:12:21.982 ] 00:12:21.982 } 00:12:21.982 ] 00:12:21.982 } 00:12:22.243 [2024-12-14 01:11:55.594113] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:12:22.243 [2024-12-14 01:11:55.594248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82238 ] 00:12:22.243 [2024-12-14 01:11:55.744644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.243 [2024-12-14 01:11:55.773849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.504 Running I/O for 5 seconds... 00:12:24.390 25669.00 IOPS, 100.27 MiB/s [2024-12-14T01:11:58.944Z] 25263.50 IOPS, 98.69 MiB/s [2024-12-14T01:12:00.330Z] 25205.67 IOPS, 98.46 MiB/s [2024-12-14T01:12:01.274Z] 25010.00 IOPS, 97.70 MiB/s 00:12:27.662 Latency(us) 00:12:27.662 [2024-12-14T01:12:01.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.662 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:27.662 xnvme_bdev : 5.00 24580.54 96.02 0.00 0.00 2598.31 456.86 8570.09 00:12:27.662 [2024-12-14T01:12:01.274Z] =================================================================================================================== 00:12:27.662 [2024-12-14T01:12:01.274Z] Total : 24580.54 96.02 0.00 0.00 2598.31 456.86 8570.09 00:12:27.662 01:12:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.662 01:12:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:27.662 01:12:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:27.662 01:12:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.662 01:12:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.662 { 00:12:27.662 "subsystems": [ 00:12:27.662 { 00:12:27.662 "subsystem": "bdev", 00:12:27.662 
"config": [ 00:12:27.662 { 00:12:27.662 "params": { 00:12:27.662 "io_mechanism": "libaio", 00:12:27.662 "conserve_cpu": false, 00:12:27.662 "filename": "/dev/nvme0n1", 00:12:27.662 "name": "xnvme_bdev" 00:12:27.662 }, 00:12:27.662 "method": "bdev_xnvme_create" 00:12:27.662 }, 00:12:27.662 { 00:12:27.662 "method": "bdev_wait_for_examine" 00:12:27.662 } 00:12:27.662 ] 00:12:27.662 } 00:12:27.662 ] 00:12:27.662 } 00:12:27.662 [2024-12-14 01:12:01.166198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:27.662 [2024-12-14 01:12:01.166328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82303 ] 00:12:27.924 [2024-12-14 01:12:01.314682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.924 [2024-12-14 01:12:01.343516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.924 Running I/O for 5 seconds... 
00:12:30.251 31669.00 IOPS, 123.71 MiB/s [2024-12-14T01:12:04.805Z] 32702.50 IOPS, 127.74 MiB/s [2024-12-14T01:12:05.744Z] 32536.67 IOPS, 127.10 MiB/s [2024-12-14T01:12:06.684Z] 32390.50 IOPS, 126.53 MiB/s 00:12:33.072 Latency(us) 00:12:33.072 [2024-12-14T01:12:06.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.072 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:33.072 xnvme_bdev : 5.00 33448.32 130.66 0.00 0.00 1908.75 187.47 6503.19 00:12:33.072 [2024-12-14T01:12:06.684Z] =================================================================================================================== 00:12:33.072 [2024-12-14T01:12:06.684Z] Total : 33448.32 130.66 0.00 0.00 1908.75 187.47 6503.19 00:12:33.072 00:12:33.072 real 0m11.115s 00:12:33.072 user 0m3.478s 00:12:33.072 sys 0m6.251s 00:12:33.072 01:12:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.072 01:12:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 ************************************ 00:12:33.072 END TEST xnvme_bdevperf 00:12:33.072 ************************************ 00:12:33.334 01:12:06 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:33.334 01:12:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.334 01:12:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.334 01:12:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.334 ************************************ 00:12:33.334 START TEST xnvme_fio_plugin 00:12:33.334 ************************************ 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 
00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.334 01:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.334 { 00:12:33.334 "subsystems": [ 00:12:33.334 { 00:12:33.334 "subsystem": "bdev", 00:12:33.334 "config": [ 00:12:33.334 { 00:12:33.334 "params": { 00:12:33.334 "io_mechanism": "libaio", 00:12:33.334 "conserve_cpu": false, 00:12:33.334 "filename": "/dev/nvme0n1", 00:12:33.334 "name": "xnvme_bdev" 00:12:33.334 }, 00:12:33.334 "method": "bdev_xnvme_create" 00:12:33.334 }, 00:12:33.334 { 00:12:33.334 "method": "bdev_wait_for_examine" 00:12:33.334 } 00:12:33.334 ] 00:12:33.334 } 00:12:33.334 ] 00:12:33.334 } 00:12:33.334 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:33.334 fio-3.35 00:12:33.334 Starting 1 thread 00:12:39.933 00:12:39.933 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82413: Sat Dec 14 01:12:12 2024 00:12:39.933 read: IOPS=33.4k, BW=130MiB/s (137MB/s)(652MiB/5001msec) 00:12:39.933 slat (usec): min=4, max=2243, avg=17.44, stdev=88.15 00:12:39.933 clat (usec): min=96, max=5971, avg=1432.27, stdev=497.19 00:12:39.933 lat 
(usec): min=189, max=5976, avg=1449.71, stdev=487.52 00:12:39.933 clat percentiles (usec): 00:12:39.933 | 1.00th=[ 314], 5.00th=[ 627], 10.00th=[ 799], 20.00th=[ 1012], 00:12:39.933 | 30.00th=[ 1188], 40.00th=[ 1319], 50.00th=[ 1450], 60.00th=[ 1565], 00:12:39.933 | 70.00th=[ 1680], 80.00th=[ 1795], 90.00th=[ 2008], 95.00th=[ 2212], 00:12:39.933 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3818], 99.95th=[ 4113], 00:12:39.933 | 99.99th=[ 4490] 00:12:39.933 bw ( KiB/s): min=130032, max=138832, per=100.00%, avg=133788.44, stdev=3434.99, samples=9 00:12:39.933 iops : min=32508, max=34708, avg=33447.11, stdev=858.75, samples=9 00:12:39.933 lat (usec) : 100=0.01%, 250=0.46%, 500=2.28%, 750=5.66%, 1000=11.00% 00:12:39.933 lat (msec) : 2=70.50%, 4=10.03%, 10=0.06% 00:12:39.933 cpu : usr=55.26%, sys=37.56%, ctx=15, majf=0, minf=1065 00:12:39.933 IO depths : 1=0.7%, 2=1.7%, 4=3.8%, 8=9.1%, 16=22.9%, 32=59.6%, >=64=2.1% 00:12:39.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.933 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:39.933 issued rwts: total=166928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:39.933 00:12:39.933 Run status group 0 (all jobs): 00:12:39.933 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=652MiB (684MB), run=5001-5001msec 00:12:39.933 ----------------------------------------------------- 00:12:39.933 Suppressions used: 00:12:39.933 count bytes template 00:12:39.933 1 11 /usr/src/fio/parse.c 00:12:39.933 1 8 libtcmalloc_minimal.so 00:12:39.933 1 904 libcrypto.so 00:12:39.933 ----------------------------------------------------- 00:12:39.933 00:12:39.933 01:12:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 
--filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 
-- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.934 01:12:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.934 { 00:12:39.934 "subsystems": [ 00:12:39.934 { 00:12:39.934 "subsystem": "bdev", 00:12:39.934 "config": [ 00:12:39.934 { 00:12:39.934 "params": { 00:12:39.934 "io_mechanism": "libaio", 00:12:39.934 "conserve_cpu": false, 00:12:39.934 "filename": "/dev/nvme0n1", 00:12:39.934 "name": "xnvme_bdev" 00:12:39.934 }, 00:12:39.934 "method": "bdev_xnvme_create" 00:12:39.934 }, 00:12:39.934 { 00:12:39.934 "method": "bdev_wait_for_examine" 00:12:39.934 } 00:12:39.934 ] 00:12:39.934 } 00:12:39.934 ] 00:12:39.934 } 00:12:39.934 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.934 fio-3.35 00:12:39.934 Starting 1 thread 00:12:45.255 00:12:45.255 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82494: Sat Dec 14 01:12:18 2024 00:12:45.255 write: IOPS=35.3k, BW=138MiB/s (145MB/s)(691MiB/5002msec); 0 zone resets 00:12:45.255 slat (usec): min=4, max=2306, avg=18.74, stdev=85.33 00:12:45.255 clat (usec): min=108, max=6718, avg=1300.37, stdev=519.86 00:12:45.255 lat (usec): min=207, max=6722, avg=1319.11, stdev=512.35 00:12:45.255 clat percentiles (usec): 00:12:45.255 | 1.00th=[ 293], 5.00th=[ 529], 10.00th=[ 668], 20.00th=[ 865], 00:12:45.255 | 30.00th=[ 1012], 40.00th=[ 1139], 50.00th=[ 
1270], 60.00th=[ 1401], 00:12:45.255 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1926], 95.00th=[ 2180], 00:12:45.255 | 99.00th=[ 2802], 99.50th=[ 3163], 99.90th=[ 3785], 99.95th=[ 4015], 00:12:45.255 | 99.99th=[ 5211] 00:12:45.255 bw ( KiB/s): min=134096, max=153032, per=99.73%, avg=140991.11, stdev=6061.35, samples=9 00:12:45.255 iops : min=33524, max=38258, avg=35247.78, stdev=1515.34, samples=9 00:12:45.255 lat (usec) : 250=0.59%, 500=3.69%, 750=9.30%, 1000=15.32% 00:12:45.255 lat (msec) : 2=62.96%, 4=8.08%, 10=0.05% 00:12:45.255 cpu : usr=47.23%, sys=43.85%, ctx=9, majf=0, minf=1066 00:12:45.255 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.3%, 16=22.7%, 32=61.7%, >=64=2.1% 00:12:45.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.255 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:45.255 issued rwts: total=0,176793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.255 00:12:45.255 Run status group 0 (all jobs): 00:12:45.255 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=691MiB (724MB), run=5002-5002msec 00:12:45.255 ----------------------------------------------------- 00:12:45.255 Suppressions used: 00:12:45.255 count bytes template 00:12:45.255 1 11 /usr/src/fio/parse.c 00:12:45.255 1 8 libtcmalloc_minimal.so 00:12:45.255 1 904 libcrypto.so 00:12:45.255 ----------------------------------------------------- 00:12:45.255 00:12:45.255 00:12:45.255 real 0m12.054s 00:12:45.255 user 0m6.265s 00:12:45.255 sys 0m4.605s 00:12:45.255 01:12:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.255 01:12:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.255 ************************************ 00:12:45.255 END TEST xnvme_fio_plugin 00:12:45.255 ************************************ 00:12:45.255 01:12:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in 
"${xnvme_conserve_cpu[@]}" 00:12:45.255 01:12:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:45.255 01:12:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:45.255 01:12:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:45.255 01:12:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.255 01:12:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.255 01:12:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.255 ************************************ 00:12:45.255 START TEST xnvme_rpc 00:12:45.255 ************************************ 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82569 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82569 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82569 ']' 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.255 01:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.514 [2024-12-14 01:12:18.898744] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:45.514 [2024-12-14 01:12:18.898862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82569 ] 00:12:45.514 [2024-12-14 01:12:19.043774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.514 [2024-12-14 01:12:19.063063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 xnvme_bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == 
"bdev_xnvme_create").params.conserve_cpu' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82569 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82569 ']' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82569 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82569 00:12:46.453 killing process with pid 82569 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82569' 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82569 00:12:46.453 01:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82569 00:12:46.712 ************************************ 00:12:46.712 END TEST xnvme_rpc 00:12:46.712 ************************************ 00:12:46.712 00:12:46.712 real 
0m1.333s 00:12:46.712 user 0m1.450s 00:12:46.712 sys 0m0.329s 00:12:46.712 01:12:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.712 01:12:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.712 01:12:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:46.712 01:12:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.712 01:12:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.712 01:12:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.712 ************************************ 00:12:46.712 START TEST xnvme_bdevperf 00:12:46.712 ************************************ 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.712 01:12:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.712 { 00:12:46.712 "subsystems": [ 00:12:46.712 { 00:12:46.712 "subsystem": "bdev", 00:12:46.712 "config": [ 00:12:46.712 { 00:12:46.712 "params": { 00:12:46.712 "io_mechanism": "libaio", 00:12:46.712 "conserve_cpu": true, 00:12:46.712 "filename": "/dev/nvme0n1", 00:12:46.712 "name": "xnvme_bdev" 00:12:46.712 }, 00:12:46.712 "method": "bdev_xnvme_create" 00:12:46.712 }, 00:12:46.712 { 
00:12:46.712 "method": "bdev_wait_for_examine" 00:12:46.712 } 00:12:46.712 ] 00:12:46.712 } 00:12:46.712 ] 00:12:46.712 } 00:12:46.712 [2024-12-14 01:12:20.276511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:46.712 [2024-12-14 01:12:20.276736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ] 00:12:46.971 [2024-12-14 01:12:20.414814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.971 [2024-12-14 01:12:20.434051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.971 Running I/O for 5 seconds... 00:12:48.925 33480.00 IOPS, 130.78 MiB/s [2024-12-14T01:12:23.921Z] 34803.00 IOPS, 135.95 MiB/s [2024-12-14T01:12:24.864Z] 35479.00 IOPS, 138.59 MiB/s [2024-12-14T01:12:25.804Z] 35011.75 IOPS, 136.76 MiB/s [2024-12-14T01:12:25.804Z] 34949.20 IOPS, 136.52 MiB/s 00:12:52.192 Latency(us) 00:12:52.192 [2024-12-14T01:12:25.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.192 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:52.192 xnvme_bdev : 5.01 34920.41 136.41 0.00 0.00 1828.24 382.82 6755.25 00:12:52.192 [2024-12-14T01:12:25.804Z] =================================================================================================================== 00:12:52.192 [2024-12-14T01:12:25.804Z] Total : 34920.41 136.41 0.00 0.00 1828.24 382.82 6755.25 00:12:52.192 01:12:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:52.192 01:12:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:52.192 01:12:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:52.193 01:12:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.193 01:12:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.193 { 00:12:52.193 "subsystems": [ 00:12:52.193 { 00:12:52.193 "subsystem": "bdev", 00:12:52.193 "config": [ 00:12:52.193 { 00:12:52.193 "params": { 00:12:52.193 "io_mechanism": "libaio", 00:12:52.193 "conserve_cpu": true, 00:12:52.193 "filename": "/dev/nvme0n1", 00:12:52.193 "name": "xnvme_bdev" 00:12:52.193 }, 00:12:52.193 "method": "bdev_xnvme_create" 00:12:52.193 }, 00:12:52.193 { 00:12:52.193 "method": "bdev_wait_for_examine" 00:12:52.193 } 00:12:52.193 ] 00:12:52.193 } 00:12:52.193 ] 00:12:52.193 } 00:12:52.193 [2024-12-14 01:12:25.763275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:52.193 [2024-12-14 01:12:25.763376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82696 ] 00:12:52.453 [2024-12-14 01:12:25.914213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.453 [2024-12-14 01:12:25.940833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.453 Running I/O for 5 seconds... 
00:12:54.781 33517.00 IOPS, 130.93 MiB/s [2024-12-14T01:12:29.335Z] 34118.50 IOPS, 133.28 MiB/s [2024-12-14T01:12:30.279Z] 33964.67 IOPS, 132.67 MiB/s [2024-12-14T01:12:31.224Z] 32770.00 IOPS, 128.01 MiB/s 00:12:57.612 Latency(us) 00:12:57.612 [2024-12-14T01:12:31.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.612 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:57.612 xnvme_bdev : 5.00 32726.64 127.84 0.00 0.00 1950.93 456.86 6377.16 00:12:57.612 [2024-12-14T01:12:31.224Z] =================================================================================================================== 00:12:57.612 [2024-12-14T01:12:31.224Z] Total : 32726.64 127.84 0.00 0.00 1950.93 456.86 6377.16 00:12:57.873 ************************************ 00:12:57.873 END TEST xnvme_bdevperf 00:12:57.873 ************************************ 00:12:57.873 00:12:57.873 real 0m11.041s 00:12:57.873 user 0m3.404s 00:12:57.873 sys 0m6.049s 00:12:57.873 01:12:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.873 01:12:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.873 01:12:31 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:57.873 01:12:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.873 01:12:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.873 01:12:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.873 ************************************ 00:12:57.873 START TEST xnvme_fio_plugin 00:12:57.873 ************************************ 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 
00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.873 01:12:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:57.873 01:12:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.873 { 00:12:57.873 "subsystems": [ 00:12:57.873 { 00:12:57.873 "subsystem": "bdev", 00:12:57.873 "config": [ 00:12:57.873 { 00:12:57.873 "params": { 00:12:57.873 "io_mechanism": "libaio", 00:12:57.873 "conserve_cpu": true, 00:12:57.873 "filename": "/dev/nvme0n1", 00:12:57.873 "name": "xnvme_bdev" 00:12:57.873 }, 00:12:57.873 "method": "bdev_xnvme_create" 00:12:57.873 }, 00:12:57.873 { 00:12:57.873 "method": "bdev_wait_for_examine" 00:12:57.873 } 00:12:57.873 ] 00:12:57.873 } 00:12:57.873 ] 00:12:57.873 } 00:12:58.134 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:58.134 fio-3.35 00:12:58.134 Starting 1 thread 00:13:03.425 00:13:03.425 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82799: Sat Dec 14 01:12:36 2024 00:13:03.425 read: IOPS=31.9k, BW=125MiB/s (131MB/s)(624MiB/5001msec) 00:13:03.425 slat (usec): min=4, max=2162, avg=21.55, stdev=97.28 00:13:03.425 clat (usec): min=109, max=5514, avg=1410.64, 
stdev=552.28 00:13:03.425 lat (usec): min=197, max=5606, avg=1432.20, stdev=542.43 00:13:03.425 clat percentiles (usec): 00:13:03.425 | 1.00th=[ 285], 5.00th=[ 545], 10.00th=[ 725], 20.00th=[ 947], 00:13:03.425 | 30.00th=[ 1123], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1532], 00:13:03.425 | 70.00th=[ 1663], 80.00th=[ 1827], 90.00th=[ 2057], 95.00th=[ 2311], 00:13:03.425 | 99.00th=[ 3064], 99.50th=[ 3359], 99.90th=[ 3949], 99.95th=[ 4359], 00:13:03.425 | 99.99th=[ 4621] 00:13:03.425 bw ( KiB/s): min=112224, max=134312, per=99.62%, avg=127244.44, stdev=7255.72, samples=9 00:13:03.425 iops : min=28056, max=33578, avg=31811.11, stdev=1813.93, samples=9 00:13:03.425 lat (usec) : 250=0.58%, 500=3.50%, 750=6.85%, 1000=12.07% 00:13:03.425 lat (msec) : 2=65.01%, 4=11.89%, 10=0.09% 00:13:03.425 cpu : usr=43.62%, sys=48.42%, ctx=16, majf=0, minf=1065 00:13:03.425 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.4%, 16=22.7%, 32=61.8%, >=64=2.1% 00:13:03.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.425 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:03.425 issued rwts: total=159696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:03.425 00:13:03.425 Run status group 0 (all jobs): 00:13:03.425 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=624MiB (654MB), run=5001-5001msec 00:13:03.998 ----------------------------------------------------- 00:13:03.998 Suppressions used: 00:13:03.998 count bytes template 00:13:03.998 1 11 /usr/src/fio/parse.c 00:13:03.998 1 8 libtcmalloc_minimal.so 00:13:03.998 1 904 libcrypto.so 00:13:03.998 ----------------------------------------------------- 00:13:03.998 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.998 01:12:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.998 { 00:13:03.998 "subsystems": [ 00:13:03.998 { 00:13:03.998 "subsystem": "bdev", 00:13:03.998 "config": [ 00:13:03.998 { 00:13:03.998 "params": { 00:13:03.998 "io_mechanism": "libaio", 00:13:03.998 "conserve_cpu": true, 00:13:03.998 "filename": "/dev/nvme0n1", 00:13:03.998 "name": "xnvme_bdev" 00:13:03.998 }, 00:13:03.998 "method": "bdev_xnvme_create" 00:13:03.998 }, 00:13:03.998 { 00:13:03.998 "method": "bdev_wait_for_examine" 00:13:03.998 } 00:13:03.998 ] 00:13:03.998 } 00:13:03.998 ] 00:13:03.998 } 00:13:03.998 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:03.998 fio-3.35 00:13:03.998 Starting 1 thread 00:13:10.585 00:13:10.585 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82885: Sat Dec 14 01:12:42 2024 00:13:10.585 write: IOPS=33.3k, BW=130MiB/s (136MB/s)(650MiB/5001msec); 0 zone resets 00:13:10.585 slat (usec): min=4, max=2072, avg=20.90, stdev=94.23 00:13:10.585 clat (usec): min=56, max=4859, avg=1355.92, stdev=523.40 00:13:10.585 lat (usec): min=196, max=4941, avg=1376.81, stdev=514.09 00:13:10.585 clat percentiles (usec): 00:13:10.585 | 1.00th=[ 293], 5.00th=[ 545], 10.00th=[ 701], 20.00th=[ 898], 00:13:10.585 | 30.00th=[ 1074], 
40.00th=[ 1221], 50.00th=[ 1352], 60.00th=[ 1483], 00:13:10.585 | 70.00th=[ 1614], 80.00th=[ 1762], 90.00th=[ 1991], 95.00th=[ 2212], 00:13:10.585 | 99.00th=[ 2835], 99.50th=[ 3130], 99.90th=[ 3752], 99.95th=[ 4113], 00:13:10.585 | 99.99th=[ 4424] 00:13:10.585 bw ( KiB/s): min=127416, max=140008, per=100.00%, avg=133105.78, stdev=4416.18, samples=9 00:13:10.585 iops : min=31854, max=35002, avg=33276.44, stdev=1104.04, samples=9 00:13:10.585 lat (usec) : 100=0.01%, 250=0.55%, 500=3.48%, 750=8.08%, 1000=13.54% 00:13:10.585 lat (msec) : 2=64.88%, 4=9.40%, 10=0.07% 00:13:10.585 cpu : usr=44.66%, sys=46.98%, ctx=10, majf=0, minf=1066 00:13:10.585 IO depths : 1=0.5%, 2=1.3%, 4=3.1%, 8=8.1%, 16=22.4%, 32=62.6%, >=64=2.1% 00:13:10.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.585 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:10.585 issued rwts: total=0,166355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:10.585 00:13:10.585 Run status group 0 (all jobs): 00:13:10.585 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=650MiB (681MB), run=5001-5001msec 00:13:10.585 ----------------------------------------------------- 00:13:10.585 Suppressions used: 00:13:10.585 count bytes template 00:13:10.585 1 11 /usr/src/fio/parse.c 00:13:10.585 1 8 libtcmalloc_minimal.so 00:13:10.585 1 904 libcrypto.so 00:13:10.585 ----------------------------------------------------- 00:13:10.585 00:13:10.585 ************************************ 00:13:10.585 END TEST xnvme_fio_plugin 00:13:10.585 ************************************ 00:13:10.585 00:13:10.585 real 0m12.061s 00:13:10.585 user 0m5.550s 00:13:10.585 sys 0m5.315s 00:13:10.585 01:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.585 01:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.585 01:12:43 
nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:10.585 01:12:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:10.585 01:12:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.585 01:12:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.585 01:12:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.585 ************************************ 00:13:10.585 START TEST xnvme_rpc 00:13:10.585 ************************************ 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82960 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82960 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82960 ']' 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.585 01:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:10.585 [2024-12-14 01:12:43.545955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:10.585 [2024-12-14 01:12:43.546341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82960 ] 00:13:10.585 [2024-12-14 01:12:43.695412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.585 [2024-12-14 01:12:43.724040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.847 xnvme_bdev 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.847 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ 
false == \f\a\l\s\e ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82960 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82960 ']' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82960 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82960 00:13:11.108 killing process with pid 82960 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82960' 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82960 00:13:11.108 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82960 00:13:11.369 ************************************ 00:13:11.369 END TEST xnvme_rpc 00:13:11.369 ************************************ 00:13:11.369 00:13:11.369 real 0m1.421s 00:13:11.369 user 0m1.510s 00:13:11.369 sys 0m0.399s 00:13:11.369 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.369 01:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 01:12:44 nvme_xnvme -- 
xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:11.369 01:12:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.369 01:12:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.369 01:12:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 ************************************ 00:13:11.369 START TEST xnvme_bdevperf 00:13:11.369 ************************************ 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:11.369 01:12:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.630 { 00:13:11.630 "subsystems": [ 00:13:11.630 { 00:13:11.630 "subsystem": "bdev", 00:13:11.630 "config": [ 00:13:11.630 { 00:13:11.630 "params": { 00:13:11.630 "io_mechanism": "io_uring", 00:13:11.630 "conserve_cpu": false, 00:13:11.630 "filename": "/dev/nvme0n1", 00:13:11.630 "name": "xnvme_bdev" 00:13:11.630 }, 00:13:11.630 "method": "bdev_xnvme_create" 00:13:11.630 }, 00:13:11.630 { 00:13:11.630 "method": "bdev_wait_for_examine" 00:13:11.630 } 00:13:11.630 ] 00:13:11.630 } 00:13:11.630 ] 00:13:11.630 } 00:13:11.630 [2024-12-14 01:12:45.020478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:11.630 [2024-12-14 01:12:45.020797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83018 ] 00:13:11.630 [2024-12-14 01:12:45.168008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.630 [2024-12-14 01:12:45.196604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.905 Running I/O for 5 seconds... 00:13:13.832 32064.00 IOPS, 125.25 MiB/s [2024-12-14T01:12:48.387Z] 31968.00 IOPS, 124.88 MiB/s [2024-12-14T01:12:49.329Z] 31957.33 IOPS, 124.83 MiB/s [2024-12-14T01:12:50.716Z] 32000.00 IOPS, 125.00 MiB/s 00:13:17.104 Latency(us) 00:13:17.104 [2024-12-14T01:12:50.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.104 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:17.104 xnvme_bdev : 5.00 31893.38 124.58 0.00 0.00 2002.97 1209.90 4234.63 00:13:17.104 [2024-12-14T01:12:50.716Z] =================================================================================================================== 00:13:17.104 [2024-12-14T01:12:50.716Z] Total : 31893.38 124.58 0.00 0.00 2002.97 1209.90 4234.63 00:13:17.104 01:12:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.104 01:12:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:17.104 01:12:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:17.104 01:12:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:17.104 01:12:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:17.104 { 00:13:17.104 "subsystems": [ 00:13:17.104 { 00:13:17.104 "subsystem": "bdev", 
00:13:17.104 "config": [ 00:13:17.104 { 00:13:17.104 "params": { 00:13:17.104 "io_mechanism": "io_uring", 00:13:17.104 "conserve_cpu": false, 00:13:17.104 "filename": "/dev/nvme0n1", 00:13:17.104 "name": "xnvme_bdev" 00:13:17.104 }, 00:13:17.104 "method": "bdev_xnvme_create" 00:13:17.104 }, 00:13:17.104 { 00:13:17.104 "method": "bdev_wait_for_examine" 00:13:17.104 } 00:13:17.104 ] 00:13:17.104 } 00:13:17.104 ] 00:13:17.104 } 00:13:17.104 [2024-12-14 01:12:50.549011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:17.104 [2024-12-14 01:12:50.549350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83083 ] 00:13:17.104 [2024-12-14 01:12:50.696117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.366 [2024-12-14 01:12:50.726313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.366 Running I/O for 5 seconds... 
00:13:19.254 33691.00 IOPS, 131.61 MiB/s [2024-12-14T01:12:54.253Z] 33564.00 IOPS, 131.11 MiB/s [2024-12-14T01:12:55.196Z] 33201.33 IOPS, 129.69 MiB/s [2024-12-14T01:12:56.138Z] 33154.50 IOPS, 129.51 MiB/s 00:13:22.526 Latency(us) 00:13:22.526 [2024-12-14T01:12:56.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.526 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:22.526 xnvme_bdev : 5.00 32991.54 128.87 0.00 0.00 1935.98 367.06 6856.07 00:13:22.526 [2024-12-14T01:12:56.138Z] =================================================================================================================== 00:13:22.526 [2024-12-14T01:12:56.138Z] Total : 32991.54 128.87 0.00 0.00 1935.98 367.06 6856.07 00:13:22.526 00:13:22.526 real 0m11.057s 00:13:22.526 user 0m4.449s 00:13:22.526 sys 0m6.352s 00:13:22.526 ************************************ 00:13:22.526 END TEST xnvme_bdevperf 00:13:22.526 ************************************ 00:13:22.526 01:12:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.526 01:12:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.526 01:12:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:22.526 01:12:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.526 01:12:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.526 01:12:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.526 ************************************ 00:13:22.526 START TEST xnvme_fio_plugin 00:13:22.526 ************************************ 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 
00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.526 01:12:56 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.526 01:12:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.526 { 00:13:22.526 "subsystems": [ 00:13:22.526 { 00:13:22.526 "subsystem": "bdev", 00:13:22.526 "config": [ 00:13:22.526 { 00:13:22.526 "params": { 00:13:22.526 "io_mechanism": "io_uring", 00:13:22.526 "conserve_cpu": false, 00:13:22.526 "filename": "/dev/nvme0n1", 00:13:22.526 "name": "xnvme_bdev" 00:13:22.526 }, 00:13:22.526 "method": "bdev_xnvme_create" 00:13:22.526 }, 00:13:22.526 { 00:13:22.526 "method": "bdev_wait_for_examine" 00:13:22.526 } 00:13:22.526 ] 00:13:22.526 } 00:13:22.526 ] 00:13:22.527 } 00:13:22.788 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:22.788 fio-3.35 00:13:22.788 Starting 1 thread 00:13:28.076 00:13:28.076 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83191: Sat Dec 14 01:13:01 2024 00:13:28.077 read: IOPS=32.3k, BW=126MiB/s (132MB/s)(632MiB/5001msec) 00:13:28.077 slat (nsec): min=2874, max=87820, avg=3345.22, stdev=1663.47 00:13:28.077 clat (usec): min=911, max=3797, avg=1844.76, 
stdev=317.05 00:13:28.077 lat (usec): min=914, max=3831, avg=1848.11, stdev=317.29 00:13:28.077 clat percentiles (usec): 00:13:28.077 | 1.00th=[ 1205], 5.00th=[ 1369], 10.00th=[ 1467], 20.00th=[ 1582], 00:13:28.077 | 30.00th=[ 1663], 40.00th=[ 1745], 50.00th=[ 1827], 60.00th=[ 1909], 00:13:28.077 | 70.00th=[ 2008], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 2376], 00:13:28.077 | 99.00th=[ 2704], 99.50th=[ 2868], 99.90th=[ 3130], 99.95th=[ 3195], 00:13:28.077 | 99.99th=[ 3621] 00:13:28.077 bw ( KiB/s): min=121856, max=136192, per=99.96%, avg=129308.44, stdev=4500.88, samples=9 00:13:28.077 iops : min=30464, max=34048, avg=32327.11, stdev=1125.22, samples=9 00:13:28.077 lat (usec) : 1000=0.02% 00:13:28.077 lat (msec) : 2=69.96%, 4=30.02% 00:13:28.077 cpu : usr=31.84%, sys=67.12%, ctx=10, majf=0, minf=1063 00:13:28.077 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:28.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.077 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:28.077 issued rwts: total=161728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.077 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:28.077 00:13:28.077 Run status group 0 (all jobs): 00:13:28.077 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=632MiB (662MB), run=5001-5001msec 00:13:28.647 ----------------------------------------------------- 00:13:28.647 Suppressions used: 00:13:28.647 count bytes template 00:13:28.647 1 11 /usr/src/fio/parse.c 00:13:28.647 1 8 libtcmalloc_minimal.so 00:13:28.647 1 904 libcrypto.so 00:13:28.647 ----------------------------------------------------- 00:13:28.647 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev 
--direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.647 01:13:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.647 { 00:13:28.647 "subsystems": [ 00:13:28.647 { 00:13:28.647 "subsystem": "bdev", 00:13:28.647 "config": [ 00:13:28.647 { 00:13:28.647 "params": { 00:13:28.647 "io_mechanism": "io_uring", 00:13:28.647 "conserve_cpu": false, 00:13:28.647 "filename": "/dev/nvme0n1", 00:13:28.647 "name": "xnvme_bdev" 00:13:28.647 }, 00:13:28.647 "method": "bdev_xnvme_create" 00:13:28.647 }, 00:13:28.647 { 00:13:28.647 "method": "bdev_wait_for_examine" 00:13:28.647 } 00:13:28.647 ] 00:13:28.647 } 00:13:28.647 ] 00:13:28.647 } 00:13:28.647 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.647 fio-3.35 00:13:28.647 Starting 1 thread 00:13:34.057 00:13:34.057 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83272: Sat Dec 14 01:13:07 2024 00:13:34.057 write: IOPS=33.9k, BW=132MiB/s (139MB/s)(662MiB/5001msec); 0 zone resets 00:13:34.057 slat (usec): min=2, max=108, avg= 3.43, stdev= 1.64 00:13:34.057 clat (usec): min=364, max=4323, avg=1751.69, stdev=309.97 00:13:34.057 lat (usec): min=367, max=4326, avg=1755.12, stdev=310.14 00:13:34.057 clat percentiles (usec): 00:13:34.057 | 1.00th=[ 1172], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1483], 00:13:34.057 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 
1729], 60.00th=[ 1795], 00:13:34.057 | 70.00th=[ 1893], 80.00th=[ 2008], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:34.057 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3261], 99.95th=[ 3523], 00:13:34.057 | 99.99th=[ 4178] 00:13:34.057 bw ( KiB/s): min=133072, max=140208, per=100.00%, avg=136077.33, stdev=2595.08, samples=9 00:13:34.057 iops : min=33268, max=35052, avg=34019.33, stdev=648.77, samples=9 00:13:34.057 lat (usec) : 500=0.01%, 1000=0.04% 00:13:34.057 lat (msec) : 2=79.86%, 4=20.05%, 10=0.04% 00:13:34.057 cpu : usr=31.78%, sys=67.16%, ctx=13, majf=0, minf=1064 00:13:34.057 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:34.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.057 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:34.057 issued rwts: total=0,169407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:34.057 00:13:34.057 Run status group 0 (all jobs): 00:13:34.057 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=662MiB (694MB), run=5001-5001msec 00:13:34.318 ----------------------------------------------------- 00:13:34.318 Suppressions used: 00:13:34.318 count bytes template 00:13:34.318 1 11 /usr/src/fio/parse.c 00:13:34.318 1 8 libtcmalloc_minimal.so 00:13:34.318 1 904 libcrypto.so 00:13:34.318 ----------------------------------------------------- 00:13:34.318 00:13:34.318 00:13:34.318 real 0m11.848s 00:13:34.318 user 0m4.230s 00:13:34.318 sys 0m7.185s 00:13:34.318 ************************************ 00:13:34.318 END TEST xnvme_fio_plugin 00:13:34.318 ************************************ 00:13:34.318 01:13:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.318 01:13:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.578 01:13:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in 
"${xnvme_conserve_cpu[@]}" 00:13:34.578 01:13:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:34.578 01:13:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:34.578 01:13:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:34.578 01:13:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.578 01:13:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.578 01:13:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.578 ************************************ 00:13:34.578 START TEST xnvme_rpc 00:13:34.578 ************************************ 00:13:34.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83347 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83347 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83347 ']' 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.578 01:13:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.578 [2024-12-14 01:13:08.057517] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:34.578 [2024-12-14 01:13:08.057670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83347 ] 00:13:34.837 [2024-12-14 01:13:08.194607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.837 [2024-12-14 01:13:08.214077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.407 xnvme_bdev 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.407 01:13:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:35.407 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83347 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83347 ']' 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83347 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83347 00:13:35.668 killing process with pid 83347 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83347' 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83347 00:13:35.668 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83347 00:13:35.929 00:13:35.929 real 0m1.472s 00:13:35.929 user 0m1.600s 00:13:35.929 sys 0m0.350s 00:13:35.929 01:13:09 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.929 01:13:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.929 ************************************ 00:13:35.929 END TEST xnvme_rpc 00:13:35.929 ************************************ 00:13:35.929 01:13:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:35.929 01:13:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.929 01:13:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.929 01:13:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.929 ************************************ 00:13:35.929 START TEST xnvme_bdevperf 00:13:35.929 ************************************ 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.929 01:13:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.189 { 00:13:36.189 "subsystems": [ 00:13:36.189 { 00:13:36.189 "subsystem": "bdev", 00:13:36.189 "config": [ 00:13:36.189 { 00:13:36.189 "params": { 00:13:36.189 "io_mechanism": "io_uring", 00:13:36.189 "conserve_cpu": true, 00:13:36.189 "filename": "/dev/nvme0n1", 00:13:36.189 "name": "xnvme_bdev" 00:13:36.189 }, 00:13:36.189 "method": "bdev_xnvme_create" 
00:13:36.189 }, 00:13:36.189 { 00:13:36.189 "method": "bdev_wait_for_examine" 00:13:36.189 } 00:13:36.189 ] 00:13:36.189 } 00:13:36.189 ] 00:13:36.189 } 00:13:36.189 [2024-12-14 01:13:09.593335] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:36.189 [2024-12-14 01:13:09.593467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83410 ] 00:13:36.189 [2024-12-14 01:13:09.739822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.189 [2024-12-14 01:13:09.761166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.450 Running I/O for 5 seconds... 00:13:38.330 35008.00 IOPS, 136.75 MiB/s [2024-12-14T01:13:12.883Z] 34496.00 IOPS, 134.75 MiB/s [2024-12-14T01:13:14.262Z] 34538.67 IOPS, 134.92 MiB/s [2024-12-14T01:13:15.203Z] 34992.00 IOPS, 136.69 MiB/s [2024-12-14T01:13:15.203Z] 35616.80 IOPS, 139.13 MiB/s 00:13:41.591 Latency(us) 00:13:41.591 [2024-12-14T01:13:15.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.591 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:41.591 xnvme_bdev : 5.00 35609.98 139.10 0.00 0.00 1793.58 888.52 8721.33 00:13:41.591 [2024-12-14T01:13:15.203Z] =================================================================================================================== 00:13:41.591 [2024-12-14T01:13:15.203Z] Total : 35609.98 139.10 0.00 0.00 1793.58 888.52 8721.33 00:13:41.591 01:13:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.591 01:13:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:41.591 01:13:14 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:41.591 01:13:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.591 01:13:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.591 { 00:13:41.591 "subsystems": [ 00:13:41.591 { 00:13:41.591 "subsystem": "bdev", 00:13:41.591 "config": [ 00:13:41.591 { 00:13:41.591 "params": { 00:13:41.591 "io_mechanism": "io_uring", 00:13:41.591 "conserve_cpu": true, 00:13:41.591 "filename": "/dev/nvme0n1", 00:13:41.591 "name": "xnvme_bdev" 00:13:41.591 }, 00:13:41.591 "method": "bdev_xnvme_create" 00:13:41.591 }, 00:13:41.591 { 00:13:41.591 "method": "bdev_wait_for_examine" 00:13:41.591 } 00:13:41.591 ] 00:13:41.591 } 00:13:41.591 ] 00:13:41.591 } 00:13:41.591 [2024-12-14 01:13:15.056883] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:41.591 [2024-12-14 01:13:15.056993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83474 ] 00:13:41.853 [2024-12-14 01:13:15.203070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.853 [2024-12-14 01:13:15.221990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.853 Running I/O for 5 seconds... 
00:13:43.736 36527.00 IOPS, 142.68 MiB/s [2024-12-14T01:13:18.730Z] 37170.50 IOPS, 145.20 MiB/s [2024-12-14T01:13:19.675Z] 37532.67 IOPS, 146.61 MiB/s [2024-12-14T01:13:20.619Z] 36649.00 IOPS, 143.16 MiB/s [2024-12-14T01:13:20.619Z] 35897.80 IOPS, 140.23 MiB/s 00:13:47.007 Latency(us) 00:13:47.007 [2024-12-14T01:13:20.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.007 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:47.007 xnvme_bdev : 5.01 35873.72 140.13 0.00 0.00 1780.20 819.20 7763.50 00:13:47.007 [2024-12-14T01:13:20.620Z] =================================================================================================================== 00:13:47.008 [2024-12-14T01:13:20.620Z] Total : 35873.72 140.13 0.00 0.00 1780.20 819.20 7763.50 00:13:47.008 00:13:47.008 real 0m10.960s 00:13:47.008 user 0m7.700s 00:13:47.008 sys 0m2.800s 00:13:47.008 01:13:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.008 ************************************ 00:13:47.008 END TEST xnvme_bdevperf 00:13:47.008 ************************************ 00:13:47.008 01:13:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.008 01:13:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:47.008 01:13:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.008 01:13:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.008 01:13:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.008 ************************************ 00:13:47.008 START TEST xnvme_fio_plugin 00:13:47.008 ************************************ 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:47.008 01:13:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.008 { 00:13:47.008 "subsystems": [ 00:13:47.008 { 00:13:47.008 "subsystem": "bdev", 00:13:47.008 "config": [ 00:13:47.008 { 00:13:47.008 "params": { 00:13:47.008 "io_mechanism": "io_uring", 00:13:47.008 "conserve_cpu": true, 00:13:47.008 "filename": "/dev/nvme0n1", 00:13:47.008 "name": "xnvme_bdev" 00:13:47.008 }, 00:13:47.008 "method": "bdev_xnvme_create" 00:13:47.008 }, 00:13:47.008 { 00:13:47.008 "method": "bdev_wait_for_examine" 00:13:47.008 } 00:13:47.008 ] 00:13:47.008 } 00:13:47.008 ] 00:13:47.008 } 00:13:47.270 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:47.270 fio-3.35 00:13:47.270 Starting 1 thread 00:13:52.565 00:13:52.565 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83577: Sat Dec 14 01:13:26 2024 00:13:52.565 read: IOPS=30.9k, BW=121MiB/s (127MB/s)(605MiB/5001msec) 00:13:52.565 slat (nsec): min=2880, max=79222, avg=3467.23, 
stdev=1773.40 00:13:52.565 clat (usec): min=202, max=3429, avg=1926.80, stdev=290.93 00:13:52.565 lat (usec): min=210, max=3462, avg=1930.27, stdev=291.19 00:13:52.565 clat percentiles (usec): 00:13:52.565 | 1.00th=[ 1385], 5.00th=[ 1500], 10.00th=[ 1582], 20.00th=[ 1680], 00:13:52.565 | 30.00th=[ 1762], 40.00th=[ 1827], 50.00th=[ 1909], 60.00th=[ 1975], 00:13:52.565 | 70.00th=[ 2057], 80.00th=[ 2147], 90.00th=[ 2311], 95.00th=[ 2442], 00:13:52.565 | 99.00th=[ 2737], 99.50th=[ 2900], 99.90th=[ 3097], 99.95th=[ 3163], 00:13:52.565 | 99.99th=[ 3294] 00:13:52.565 bw ( KiB/s): min=120079, max=128512, per=100.00%, avg=123934.11, stdev=2288.16, samples=9 00:13:52.565 iops : min=30019, max=32128, avg=30983.44, stdev=572.20, samples=9 00:13:52.565 lat (usec) : 250=0.01% 00:13:52.565 lat (msec) : 2=62.72%, 4=37.28% 00:13:52.565 cpu : usr=64.32%, sys=31.98%, ctx=65, majf=0, minf=1063 00:13:52.565 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.565 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:52.565 issued rwts: total=154753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.565 00:13:52.565 Run status group 0 (all jobs): 00:13:52.565 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=605MiB (634MB), run=5001-5001msec 00:13:53.136 ----------------------------------------------------- 00:13:53.136 Suppressions used: 00:13:53.136 count bytes template 00:13:53.136 1 11 /usr/src/fio/parse.c 00:13:53.136 1 8 libtcmalloc_minimal.so 00:13:53.136 1 904 libcrypto.so 00:13:53.136 ----------------------------------------------------- 00:13:53.136 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.136 01:13:26 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:53.136 01:13:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.136 { 00:13:53.136 "subsystems": [ 00:13:53.136 { 00:13:53.136 "subsystem": "bdev", 00:13:53.136 "config": [ 00:13:53.136 { 00:13:53.136 "params": { 00:13:53.136 "io_mechanism": "io_uring", 00:13:53.136 "conserve_cpu": true, 00:13:53.136 "filename": "/dev/nvme0n1", 00:13:53.136 "name": "xnvme_bdev" 00:13:53.136 }, 00:13:53.136 "method": "bdev_xnvme_create" 00:13:53.136 }, 00:13:53.136 { 00:13:53.136 "method": "bdev_wait_for_examine" 00:13:53.136 } 00:13:53.136 ] 00:13:53.136 } 00:13:53.136 ] 00:13:53.136 } 00:13:53.397 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.397 fio-3.35 00:13:53.397 Starting 1 thread 00:13:58.686 00:13:58.686 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83663: Sat Dec 14 01:13:32 2024 00:13:58.687 write: IOPS=32.0k, BW=125MiB/s (131MB/s)(626MiB/5001msec); 0 zone resets 00:13:58.687 slat (usec): min=2, max=349, avg= 3.61, stdev= 3.03 00:13:58.687 clat (usec): min=922, max=7097, avg=1852.57, stdev=289.36 00:13:58.687 lat (usec): min=926, max=7100, avg=1856.18, stdev=289.70 00:13:58.687 clat percentiles (usec): 00:13:58.687 | 1.00th=[ 1319], 5.00th=[ 1450], 10.00th=[ 1516], 20.00th=[ 
1614], 00:13:58.687 | 30.00th=[ 1680], 40.00th=[ 1762], 50.00th=[ 1827], 60.00th=[ 1893], 00:13:58.687 | 70.00th=[ 1975], 80.00th=[ 2073], 90.00th=[ 2212], 95.00th=[ 2376], 00:13:58.687 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3261], 00:13:58.687 | 99.99th=[ 5211] 00:13:58.687 bw ( KiB/s): min=126448, max=130048, per=100.00%, avg=128200.00, stdev=1422.04, samples=9 00:13:58.687 iops : min=31612, max=32512, avg=32050.00, stdev=355.51, samples=9 00:13:58.687 lat (usec) : 1000=0.01% 00:13:58.687 lat (msec) : 2=72.72%, 4=27.26%, 10=0.01% 00:13:58.687 cpu : usr=67.64%, sys=28.62%, ctx=65, majf=0, minf=1064 00:13:58.687 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:58.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.687 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:58.687 issued rwts: total=0,160155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.687 00:13:58.687 Run status group 0 (all jobs): 00:13:58.687 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (656MB), run=5001-5001msec 00:13:58.948 ----------------------------------------------------- 00:13:58.948 Suppressions used: 00:13:58.948 count bytes template 00:13:58.948 1 11 /usr/src/fio/parse.c 00:13:58.948 1 8 libtcmalloc_minimal.so 00:13:58.948 1 904 libcrypto.so 00:13:58.948 ----------------------------------------------------- 00:13:58.948 00:13:58.948 ************************************ 00:13:58.948 00:13:58.948 real 0m11.982s 00:13:58.948 user 0m7.722s 00:13:58.948 sys 0m3.581s 00:13:58.948 01:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.948 01:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 END TEST xnvme_fio_plugin 00:13:58.948 ************************************ 00:13:59.210 01:13:32 nvme_xnvme 
-- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:59.210 01:13:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:59.210 01:13:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.210 01:13:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.210 01:13:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 ************************************ 00:13:59.210 START TEST xnvme_rpc 00:13:59.210 ************************************ 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:59.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83738 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83738 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83738 ']' 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 01:13:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:59.210 [2024-12-14 01:13:32.688581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:59.210 [2024-12-14 01:13:32.688768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83738 ] 00:13:59.470 [2024-12-14 01:13:32.836015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.470 [2024-12-14 01:13:32.865420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.042 xnvme_bdev 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.042 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- 
# [[ false == \f\a\l\s\e ]] 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83738 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83738 ']' 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83738 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83738 00:14:00.303 killing process with pid 83738 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83738' 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83738 00:14:00.303 01:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83738 00:14:00.564 00:14:00.564 real 0m1.407s 00:14:00.564 user 0m1.495s 00:14:00.564 sys 0m0.401s 00:14:00.564 01:13:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.564 ************************************ 00:14:00.564 END TEST xnvme_rpc 00:14:00.564 01:13:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 ************************************ 00:14:00.564 01:13:34 nvme_xnvme -- 
xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:00.564 01:13:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.564 01:13:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.564 01:13:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 ************************************ 00:14:00.564 START TEST xnvme_bdevperf 00:14:00.564 ************************************ 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.564 01:13:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 { 00:14:00.564 "subsystems": [ 00:14:00.564 { 00:14:00.564 "subsystem": "bdev", 00:14:00.564 "config": [ 00:14:00.564 { 00:14:00.564 "params": { 00:14:00.564 "io_mechanism": "io_uring_cmd", 00:14:00.564 "conserve_cpu": false, 00:14:00.564 "filename": "/dev/ng0n1", 00:14:00.564 "name": "xnvme_bdev" 00:14:00.564 }, 00:14:00.564 "method": "bdev_xnvme_create" 00:14:00.564 }, 00:14:00.564 { 00:14:00.564 "method": "bdev_wait_for_examine" 00:14:00.564 } 00:14:00.564 ] 00:14:00.564 } 00:14:00.564 ] 00:14:00.564 } 00:14:00.564 [2024-12-14 01:13:34.138057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:00.564 [2024-12-14 01:13:34.138278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83796 ] 00:14:00.824 [2024-12-14 01:13:34.284209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.824 [2024-12-14 01:13:34.303393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.825 Running I/O for 5 seconds... 00:14:02.776 37114.00 IOPS, 144.98 MiB/s [2024-12-14T01:13:37.772Z] 36842.50 IOPS, 143.92 MiB/s [2024-12-14T01:13:38.717Z] 35641.00 IOPS, 139.22 MiB/s [2024-12-14T01:13:39.662Z] 34824.75 IOPS, 136.03 MiB/s [2024-12-14T01:13:39.662Z] 34229.60 IOPS, 133.71 MiB/s 00:14:06.050 Latency(us) 00:14:06.050 [2024-12-14T01:13:39.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.050 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.050 xnvme_bdev : 5.01 34203.61 133.61 0.00 0.00 1867.50 582.89 6604.01 00:14:06.050 [2024-12-14T01:13:39.662Z] =================================================================================================================== 00:14:06.050 [2024-12-14T01:13:39.662Z] Total : 34203.61 133.61 0.00 0.00 1867.50 582.89 6604.01 00:14:06.050 01:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:06.050 01:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:06.050 01:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:06.050 01:13:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:06.050 01:13:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:06.050 { 00:14:06.050 "subsystems": [ 
00:14:06.050 { 00:14:06.050 "subsystem": "bdev", 00:14:06.050 "config": [ 00:14:06.050 { 00:14:06.050 "params": { 00:14:06.050 "io_mechanism": "io_uring_cmd", 00:14:06.050 "conserve_cpu": false, 00:14:06.050 "filename": "/dev/ng0n1", 00:14:06.050 "name": "xnvme_bdev" 00:14:06.050 }, 00:14:06.050 "method": "bdev_xnvme_create" 00:14:06.050 }, 00:14:06.050 { 00:14:06.050 "method": "bdev_wait_for_examine" 00:14:06.050 } 00:14:06.050 ] 00:14:06.050 } 00:14:06.050 ] 00:14:06.050 } 00:14:06.050 [2024-12-14 01:13:39.629797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:06.050 [2024-12-14 01:13:39.630110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83859 ] 00:14:06.312 [2024-12-14 01:13:39.778554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.312 [2024-12-14 01:13:39.807608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.312 Running I/O for 5 seconds... 
00:14:08.645 33625.00 IOPS, 131.35 MiB/s [2024-12-14T01:13:43.203Z] 33497.50 IOPS, 130.85 MiB/s [2024-12-14T01:13:44.151Z] 33392.67 IOPS, 130.44 MiB/s [2024-12-14T01:13:45.161Z] 33375.50 IOPS, 130.37 MiB/s [2024-12-14T01:13:45.161Z] 33189.40 IOPS, 129.65 MiB/s 00:14:11.549 Latency(us) 00:14:11.549 [2024-12-14T01:13:45.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.550 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:11.550 xnvme_bdev : 5.00 33172.41 129.58 0.00 0.00 1925.33 371.79 9275.86 00:14:11.550 [2024-12-14T01:13:45.162Z] =================================================================================================================== 00:14:11.550 [2024-12-14T01:13:45.162Z] Total : 33172.41 129.58 0.00 0.00 1925.33 371.79 9275.86 00:14:11.550 01:13:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.550 01:13:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:11.550 01:13:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:11.550 01:13:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.550 01:13:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.550 { 00:14:11.550 "subsystems": [ 00:14:11.550 { 00:14:11.550 "subsystem": "bdev", 00:14:11.550 "config": [ 00:14:11.550 { 00:14:11.550 "params": { 00:14:11.550 "io_mechanism": "io_uring_cmd", 00:14:11.550 "conserve_cpu": false, 00:14:11.550 "filename": "/dev/ng0n1", 00:14:11.550 "name": "xnvme_bdev" 00:14:11.550 }, 00:14:11.550 "method": "bdev_xnvme_create" 00:14:11.550 }, 00:14:11.550 { 00:14:11.550 "method": "bdev_wait_for_examine" 00:14:11.550 } 00:14:11.550 ] 00:14:11.550 } 00:14:11.550 ] 00:14:11.550 } 00:14:11.810 [2024-12-14 01:13:45.179411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / 
DPDK 22.11.4 initialization... 00:14:11.810 [2024-12-14 01:13:45.179779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83923 ] 00:14:11.810 [2024-12-14 01:13:45.323138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.810 [2024-12-14 01:13:45.352493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.071 Running I/O for 5 seconds... 00:14:13.955 79360.00 IOPS, 310.00 MiB/s [2024-12-14T01:13:48.510Z] 79776.00 IOPS, 311.62 MiB/s [2024-12-14T01:13:49.894Z] 79744.00 IOPS, 311.50 MiB/s [2024-12-14T01:13:50.835Z] 79728.00 IOPS, 311.44 MiB/s [2024-12-14T01:13:50.835Z] 80396.80 IOPS, 314.05 MiB/s 00:14:17.223 Latency(us) 00:14:17.223 [2024-12-14T01:13:50.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.223 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:17.223 xnvme_bdev : 5.00 80356.15 313.89 0.00 0.00 793.10 516.73 2571.03 00:14:17.223 [2024-12-14T01:13:50.835Z] =================================================================================================================== 00:14:17.223 [2024-12-14T01:13:50.835Z] Total : 80356.15 313.89 0.00 0.00 793.10 516.73 2571.03 00:14:17.223 01:13:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.223 01:13:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:17.223 01:13:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:17.223 01:13:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:17.223 01:13:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:17.223 { 
00:14:17.223 "subsystems": [ 00:14:17.223 { 00:14:17.223 "subsystem": "bdev", 00:14:17.223 "config": [ 00:14:17.223 { 00:14:17.223 "params": { 00:14:17.223 "io_mechanism": "io_uring_cmd", 00:14:17.223 "conserve_cpu": false, 00:14:17.223 "filename": "/dev/ng0n1", 00:14:17.223 "name": "xnvme_bdev" 00:14:17.223 }, 00:14:17.223 "method": "bdev_xnvme_create" 00:14:17.223 }, 00:14:17.223 { 00:14:17.223 "method": "bdev_wait_for_examine" 00:14:17.223 } 00:14:17.223 ] 00:14:17.223 } 00:14:17.223 ] 00:14:17.223 } 00:14:17.223 [2024-12-14 01:13:50.647148] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:17.223 [2024-12-14 01:13:50.647256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83992 ] 00:14:17.223 [2024-12-14 01:13:50.789263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.223 [2024-12-14 01:13:50.806261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.484 Running I/O for 5 seconds... 
00:14:19.365 48933.00 IOPS, 191.14 MiB/s [2024-12-14T01:13:53.920Z] 29991.00 IOPS, 117.15 MiB/s [2024-12-14T01:13:55.311Z] 20167.33 IOPS, 78.78 MiB/s [2024-12-14T01:13:55.883Z] 15251.00 IOPS, 59.57 MiB/s [2024-12-14T01:13:56.144Z] 12294.00 IOPS, 48.02 MiB/s 00:14:22.532 Latency(us) 00:14:22.532 [2024-12-14T01:13:56.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.532 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:22.532 xnvme_bdev : 5.14 11966.15 46.74 0.00 0.00 5267.46 64.20 385553.33 00:14:22.532 [2024-12-14T01:13:56.144Z] =================================================================================================================== 00:14:22.532 [2024-12-14T01:13:56.144Z] Total : 11966.15 46.74 0.00 0.00 5267.46 64.20 385553.33 00:14:22.794 00:14:22.794 real 0m22.113s 00:14:22.794 user 0m11.644s 00:14:22.794 sys 0m10.028s 00:14:22.794 01:13:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.794 ************************************ 00:14:22.794 END TEST xnvme_bdevperf 00:14:22.794 ************************************ 00:14:22.794 01:13:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:22.794 01:13:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:22.794 01:13:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:22.794 01:13:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.794 01:13:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.794 ************************************ 00:14:22.794 START TEST xnvme_fio_plugin 00:14:22.794 ************************************ 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:22.794 01:13:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.794 { 00:14:22.794 "subsystems": [ 00:14:22.794 { 00:14:22.794 "subsystem": "bdev", 00:14:22.794 "config": [ 00:14:22.794 { 00:14:22.794 "params": { 00:14:22.794 "io_mechanism": "io_uring_cmd", 00:14:22.794 "conserve_cpu": false, 00:14:22.794 "filename": "/dev/ng0n1", 00:14:22.794 "name": "xnvme_bdev" 00:14:22.794 }, 00:14:22.794 "method": "bdev_xnvme_create" 00:14:22.794 }, 00:14:22.794 { 00:14:22.794 "method": "bdev_wait_for_examine" 00:14:22.794 } 00:14:22.794 ] 00:14:22.794 } 00:14:22.794 ] 00:14:22.794 } 00:14:23.056 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:23.056 fio-3.35 00:14:23.056 Starting 1 thread 00:14:28.348 00:14:28.348 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84099: Sat Dec 14 01:14:01 2024 00:14:28.348 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(692MiB/5001msec) 00:14:28.348 slat (usec): min=2, max=420, avg= 4.08, stdev= 
2.45 00:14:28.348 clat (usec): min=418, max=3469, avg=1641.15, stdev=246.85 00:14:28.348 lat (usec): min=421, max=3504, avg=1645.23, stdev=247.34 00:14:28.348 clat percentiles (usec): 00:14:28.348 | 1.00th=[ 1205], 5.00th=[ 1319], 10.00th=[ 1369], 20.00th=[ 1434], 00:14:28.348 | 30.00th=[ 1500], 40.00th=[ 1549], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:28.348 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1975], 95.00th=[ 2089], 00:14:28.348 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2802], 99.95th=[ 2933], 00:14:28.348 | 99.99th=[ 3261] 00:14:28.348 bw ( KiB/s): min=136704, max=144896, per=100.00%, avg=142108.44, stdev=2496.64, samples=9 00:14:28.348 iops : min=34176, max=36224, avg=35527.11, stdev=624.16, samples=9 00:14:28.348 lat (usec) : 500=0.01%, 1000=0.04% 00:14:28.348 lat (msec) : 2=91.28%, 4=8.67% 00:14:28.348 cpu : usr=34.18%, sys=64.36%, ctx=7, majf=0, minf=1063 00:14:28.348 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:28.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.348 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:28.348 issued rwts: total=177109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:28.348 00:14:28.348 Run status group 0 (all jobs): 00:14:28.348 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=692MiB (725MB), run=5001-5001msec 00:14:28.608 ----------------------------------------------------- 00:14:28.608 Suppressions used: 00:14:28.608 count bytes template 00:14:28.608 1 11 /usr/src/fio/parse.c 00:14:28.608 1 8 libtcmalloc_minimal.so 00:14:28.608 1 904 libcrypto.so 00:14:28.608 ----------------------------------------------------- 00:14:28.608 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:28.869 01:14:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:28.869 01:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.869 { 00:14:28.869 "subsystems": [ 00:14:28.869 { 00:14:28.869 "subsystem": "bdev", 00:14:28.869 "config": [ 00:14:28.869 { 00:14:28.869 "params": { 00:14:28.869 "io_mechanism": "io_uring_cmd", 00:14:28.869 "conserve_cpu": false, 00:14:28.869 "filename": "/dev/ng0n1", 00:14:28.869 "name": "xnvme_bdev" 00:14:28.869 }, 00:14:28.869 "method": "bdev_xnvme_create" 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "method": "bdev_wait_for_examine" 00:14:28.869 } 00:14:28.869 ] 00:14:28.869 } 00:14:28.869 ] 00:14:28.869 } 00:14:28.869 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:28.869 fio-3.35 00:14:28.869 Starting 1 thread 00:14:35.460 00:14:35.460 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84178: Sat Dec 14 01:14:07 2024 00:14:35.460 write: IOPS=35.7k, BW=140MiB/s (146MB/s)(698MiB/5001msec); 0 zone resets 00:14:35.460 slat (nsec): min=2925, max=93658, avg=4285.82, stdev=2310.01 00:14:35.460 clat (usec): min=279, max=4811, avg=1618.59, stdev=254.27 00:14:35.460 lat (usec): min=282, max=4827, avg=1622.87, stdev=254.76 00:14:35.460 clat percentiles (usec): 00:14:35.460 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1336], 
20.00th=[ 1418], 00:14:35.460 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1647], 00:14:35.460 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1926], 95.00th=[ 2057], 00:14:35.460 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 3163], 99.95th=[ 3425], 00:14:35.460 | 99.99th=[ 3752] 00:14:35.460 bw ( KiB/s): min=139800, max=148640, per=99.92%, avg=142828.89, stdev=2880.16, samples=9 00:14:35.460 iops : min=34950, max=37160, avg=35707.22, stdev=720.04, samples=9 00:14:35.460 lat (usec) : 500=0.03%, 750=0.06%, 1000=0.14% 00:14:35.460 lat (msec) : 2=92.96%, 4=6.80%, 10=0.01% 00:14:35.460 cpu : usr=35.26%, sys=63.20%, ctx=14, majf=0, minf=1064 00:14:35.460 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.7%, >=64=1.6% 00:14:35.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.460 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:35.460 issued rwts: total=0,178711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:35.460 00:14:35.460 Run status group 0 (all jobs): 00:14:35.460 WRITE: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=698MiB (732MB), run=5001-5001msec 00:14:35.460 ----------------------------------------------------- 00:14:35.460 Suppressions used: 00:14:35.460 count bytes template 00:14:35.460 1 11 /usr/src/fio/parse.c 00:14:35.460 1 8 libtcmalloc_minimal.so 00:14:35.460 1 904 libcrypto.so 00:14:35.460 ----------------------------------------------------- 00:14:35.460 00:14:35.460 00:14:35.460 real 0m11.955s 00:14:35.460 user 0m4.636s 00:14:35.460 sys 0m6.848s 00:14:35.460 01:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.460 01:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:35.460 ************************************ 00:14:35.460 END TEST xnvme_fio_plugin 00:14:35.460 ************************************ 
00:14:35.460 01:14:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:35.460 01:14:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:35.460 01:14:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:35.460 01:14:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:35.460 01:14:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:35.460 01:14:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.460 01:14:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.460 ************************************ 00:14:35.460 START TEST xnvme_rpc 00:14:35.460 ************************************ 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:35.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84253 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84253 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84253 ']' 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.460 01:14:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:35.460 [2024-12-14 01:14:08.366548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:35.460 [2024-12-14 01:14:08.366698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84253 ] 00:14:35.460 [2024-12-14 01:14:08.511426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.460 [2024-12-14 01:14:08.540144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.721 xnvme_bdev 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.721 01:14:09 
nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | 
select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.721 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84253 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84253 ']' 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84253 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84253 00:14:35.983 killing process with pid 84253 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84253' 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84253 00:14:35.983 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84253 00:14:36.244 
************************************ 00:14:36.244 END TEST xnvme_rpc 00:14:36.244 ************************************ 00:14:36.244 00:14:36.244 real 0m1.423s 00:14:36.244 user 0m1.486s 00:14:36.244 sys 0m0.423s 00:14:36.244 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.244 01:14:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.244 01:14:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:36.244 01:14:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:36.244 01:14:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.244 01:14:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.244 ************************************ 00:14:36.244 START TEST xnvme_bdevperf 00:14:36.244 ************************************ 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.244 01:14:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.244 { 00:14:36.244 "subsystems": [ 00:14:36.244 { 00:14:36.244 "subsystem": "bdev", 00:14:36.244 "config": [ 00:14:36.244 { 00:14:36.244 "params": { 00:14:36.244 "io_mechanism": "io_uring_cmd", 00:14:36.244 "conserve_cpu": true, 
00:14:36.244 "filename": "/dev/ng0n1", 00:14:36.244 "name": "xnvme_bdev" 00:14:36.244 }, 00:14:36.244 "method": "bdev_xnvme_create" 00:14:36.244 }, 00:14:36.244 { 00:14:36.244 "method": "bdev_wait_for_examine" 00:14:36.244 } 00:14:36.244 ] 00:14:36.244 } 00:14:36.244 ] 00:14:36.244 } 00:14:36.244 [2024-12-14 01:14:09.836152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:36.244 [2024-12-14 01:14:09.836442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84310 ] 00:14:36.505 [2024-12-14 01:14:09.982693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.505 [2024-12-14 01:14:10.012698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.766 Running I/O for 5 seconds... 00:14:38.728 38976.00 IOPS, 152.25 MiB/s [2024-12-14T01:14:13.281Z] 37664.00 IOPS, 147.12 MiB/s [2024-12-14T01:14:14.220Z] 37141.33 IOPS, 145.08 MiB/s [2024-12-14T01:14:15.160Z] 36703.75 IOPS, 143.37 MiB/s 00:14:41.548 Latency(us) 00:14:41.548 [2024-12-14T01:14:15.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.548 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:41.548 xnvme_bdev : 5.00 36580.48 142.89 0.00 0.00 1745.21 894.82 4209.43 00:14:41.548 [2024-12-14T01:14:15.160Z] =================================================================================================================== 00:14:41.548 [2024-12-14T01:14:15.160Z] Total : 36580.48 142.89 0.00 0.00 1745.21 894.82 4209.43 00:14:41.807 01:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.807 01:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 
-w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:41.807 01:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:41.807 01:14:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.807 01:14:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.807 { 00:14:41.807 "subsystems": [ 00:14:41.807 { 00:14:41.807 "subsystem": "bdev", 00:14:41.807 "config": [ 00:14:41.807 { 00:14:41.807 "params": { 00:14:41.807 "io_mechanism": "io_uring_cmd", 00:14:41.807 "conserve_cpu": true, 00:14:41.807 "filename": "/dev/ng0n1", 00:14:41.807 "name": "xnvme_bdev" 00:14:41.807 }, 00:14:41.807 "method": "bdev_xnvme_create" 00:14:41.807 }, 00:14:41.807 { 00:14:41.807 "method": "bdev_wait_for_examine" 00:14:41.807 } 00:14:41.807 ] 00:14:41.807 } 00:14:41.807 ] 00:14:41.807 } 00:14:41.807 [2024-12-14 01:14:15.366188] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:41.807 [2024-12-14 01:14:15.366518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84379 ] 00:14:42.066 [2024-12-14 01:14:15.515202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.066 [2024-12-14 01:14:15.544025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.066 Running I/O for 5 seconds... 
00:14:44.386 37071.00 IOPS, 144.81 MiB/s [2024-12-14T01:14:18.937Z] 37473.00 IOPS, 146.38 MiB/s [2024-12-14T01:14:19.878Z] 37349.00 IOPS, 145.89 MiB/s [2024-12-14T01:14:20.819Z] 32891.50 IOPS, 128.48 MiB/s [2024-12-14T01:14:20.819Z] 29031.00 IOPS, 113.40 MiB/s 00:14:47.207 Latency(us) 00:14:47.207 [2024-12-14T01:14:20.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.207 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:47.207 xnvme_bdev : 5.01 28990.81 113.25 0.00 0.00 2201.20 67.74 21576.47 00:14:47.207 [2024-12-14T01:14:20.819Z] =================================================================================================================== 00:14:47.207 [2024-12-14T01:14:20.819Z] Total : 28990.81 113.25 0.00 0.00 2201.20 67.74 21576.47 00:14:47.467 01:14:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.467 01:14:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:47.467 01:14:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.467 01:14:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.467 01:14:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.467 { 00:14:47.467 "subsystems": [ 00:14:47.467 { 00:14:47.467 "subsystem": "bdev", 00:14:47.467 "config": [ 00:14:47.467 { 00:14:47.467 "params": { 00:14:47.467 "io_mechanism": "io_uring_cmd", 00:14:47.467 "conserve_cpu": true, 00:14:47.467 "filename": "/dev/ng0n1", 00:14:47.467 "name": "xnvme_bdev" 00:14:47.467 }, 00:14:47.467 "method": "bdev_xnvme_create" 00:14:47.467 }, 00:14:47.467 { 00:14:47.467 "method": "bdev_wait_for_examine" 00:14:47.467 } 00:14:47.467 ] 00:14:47.467 } 00:14:47.467 ] 00:14:47.467 } 00:14:47.467 [2024-12-14 01:14:20.896912] Starting SPDK v25.01-pre git sha1 e01cb43b8 / 
DPDK 22.11.4 initialization... 00:14:47.467 [2024-12-14 01:14:20.897053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84442 ] 00:14:47.467 [2024-12-14 01:14:21.041493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.467 [2024-12-14 01:14:21.070067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.727 Running I/O for 5 seconds... 00:14:49.605 79040.00 IOPS, 308.75 MiB/s [2024-12-14T01:14:24.598Z] 79168.00 IOPS, 309.25 MiB/s [2024-12-14T01:14:25.538Z] 79253.33 IOPS, 309.58 MiB/s [2024-12-14T01:14:26.478Z] 80544.00 IOPS, 314.62 MiB/s [2024-12-14T01:14:26.478Z] 83596.80 IOPS, 326.55 MiB/s 00:14:52.866 Latency(us) 00:14:52.866 [2024-12-14T01:14:26.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.866 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:52.866 xnvme_bdev : 5.00 83571.35 326.45 0.00 0.00 762.37 400.15 2659.25 00:14:52.866 [2024-12-14T01:14:26.478Z] =================================================================================================================== 00:14:52.866 [2024-12-14T01:14:26.478Z] Total : 83571.35 326.45 0.00 0.00 762.37 400.15 2659.25 00:14:52.866 01:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.866 01:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:52.866 01:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:52.866 01:14:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:52.866 01:14:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.866 { 
00:14:52.866 "subsystems": [ 00:14:52.866 { 00:14:52.866 "subsystem": "bdev", 00:14:52.866 "config": [ 00:14:52.866 { 00:14:52.866 "params": { 00:14:52.866 "io_mechanism": "io_uring_cmd", 00:14:52.866 "conserve_cpu": true, 00:14:52.866 "filename": "/dev/ng0n1", 00:14:52.866 "name": "xnvme_bdev" 00:14:52.866 }, 00:14:52.866 "method": "bdev_xnvme_create" 00:14:52.866 }, 00:14:52.866 { 00:14:52.866 "method": "bdev_wait_for_examine" 00:14:52.866 } 00:14:52.866 ] 00:14:52.866 } 00:14:52.866 ] 00:14:52.866 } 00:14:52.866 [2024-12-14 01:14:26.355558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:52.866 [2024-12-14 01:14:26.355676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84510 ] 00:14:53.127 [2024-12-14 01:14:26.498882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.127 [2024-12-14 01:14:26.519746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.127 Running I/O for 5 seconds... 
00:14:55.012 55993.00 IOPS, 218.72 MiB/s [2024-12-14T01:14:30.003Z] 51652.00 IOPS, 201.77 MiB/s [2024-12-14T01:14:30.945Z] 47853.33 IOPS, 186.93 MiB/s [2024-12-14T01:14:31.888Z] 45593.25 IOPS, 178.10 MiB/s [2024-12-14T01:14:31.888Z] 44272.00 IOPS, 172.94 MiB/s 00:14:58.276 Latency(us) 00:14:58.276 [2024-12-14T01:14:31.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.276 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:58.276 xnvme_bdev : 5.00 44249.26 172.85 0.00 0.00 1441.06 148.09 20467.40 00:14:58.276 [2024-12-14T01:14:31.888Z] =================================================================================================================== 00:14:58.276 [2024-12-14T01:14:31.888Z] Total : 44249.26 172.85 0.00 0.00 1441.06 148.09 20467.40 00:14:58.276 00:14:58.276 real 0m22.016s 00:14:58.276 user 0m12.610s 00:14:58.276 sys 0m7.031s 00:14:58.276 ************************************ 00:14:58.276 END TEST xnvme_bdevperf 00:14:58.276 ************************************ 00:14:58.276 01:14:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.276 01:14:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:58.276 01:14:31 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:58.276 01:14:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.276 01:14:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.276 01:14:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.276 ************************************ 00:14:58.276 START TEST xnvme_fio_plugin 00:14:58.276 ************************************ 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:58.276 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:58.537 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:58.537 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:58.537 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:58.537 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:58.537 01:14:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.537 { 00:14:58.537 "subsystems": [ 00:14:58.537 { 00:14:58.537 "subsystem": "bdev", 00:14:58.537 "config": [ 00:14:58.537 { 00:14:58.537 "params": { 00:14:58.537 "io_mechanism": "io_uring_cmd", 00:14:58.537 "conserve_cpu": true, 00:14:58.537 "filename": "/dev/ng0n1", 00:14:58.537 "name": "xnvme_bdev" 00:14:58.537 }, 00:14:58.537 "method": "bdev_xnvme_create" 00:14:58.537 }, 00:14:58.537 { 00:14:58.537 "method": "bdev_wait_for_examine" 00:14:58.537 } 00:14:58.537 ] 00:14:58.537 } 00:14:58.537 ] 00:14:58.537 } 00:14:58.537 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:58.537 fio-3.35 00:14:58.537 Starting 1 thread 00:15:05.127 00:15:05.127 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84607: Sat Dec 14 01:14:37 2024 00:15:05.127 read: IOPS=38.9k, BW=152MiB/s (159MB/s)(759MiB/5001msec) 00:15:05.127 slat (nsec): min=2896, max=70905, avg=3672.03, 
stdev=1814.71 00:15:05.127 clat (usec): min=873, max=3592, avg=1499.76, stdev=267.24 00:15:05.127 lat (usec): min=876, max=3623, avg=1503.43, stdev=267.75 00:15:05.127 clat percentiles (usec): 00:15:05.127 | 1.00th=[ 1012], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1254], 00:15:05.127 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1549], 00:15:05.127 | 70.00th=[ 1631], 80.00th=[ 1713], 90.00th=[ 1844], 95.00th=[ 1958], 00:15:05.127 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2573], 99.95th=[ 2802], 00:15:05.127 | 99.99th=[ 3392] 00:15:05.127 bw ( KiB/s): min=141029, max=187392, per=100.00%, avg=156356.11, stdev=19187.23, samples=9 00:15:05.127 iops : min=35257, max=46848, avg=39089.00, stdev=4796.83, samples=9 00:15:05.127 lat (usec) : 1000=0.74% 00:15:05.127 lat (msec) : 2=95.25%, 4=4.02% 00:15:05.127 cpu : usr=57.24%, sys=39.62%, ctx=12, majf=0, minf=1063 00:15:05.127 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:05.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.127 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:05.127 issued rwts: total=194368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:05.127 00:15:05.127 Run status group 0 (all jobs): 00:15:05.127 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=759MiB (796MB), run=5001-5001msec 00:15:05.127 ----------------------------------------------------- 00:15:05.127 Suppressions used: 00:15:05.127 count bytes template 00:15:05.127 1 11 /usr/src/fio/parse.c 00:15:05.127 1 8 libtcmalloc_minimal.so 00:15:05.127 1 904 libcrypto.so 00:15:05.127 ----------------------------------------------------- 00:15:05.127 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:05.127 01:14:37 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:05.127 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:05.128 01:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:05.128 { 00:15:05.128 "subsystems": [ 00:15:05.128 { 00:15:05.128 "subsystem": "bdev", 00:15:05.128 "config": [ 00:15:05.128 { 00:15:05.128 "params": { 00:15:05.128 "io_mechanism": "io_uring_cmd", 00:15:05.128 "conserve_cpu": true, 00:15:05.128 "filename": "/dev/ng0n1", 00:15:05.128 "name": "xnvme_bdev" 00:15:05.128 }, 00:15:05.128 "method": "bdev_xnvme_create" 00:15:05.128 }, 00:15:05.128 { 00:15:05.128 "method": "bdev_wait_for_examine" 00:15:05.128 } 00:15:05.128 ] 00:15:05.128 } 00:15:05.128 ] 00:15:05.128 } 00:15:05.128 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:05.128 fio-3.35 00:15:05.128 Starting 1 thread 00:15:10.421 00:15:10.421 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84692: Sat Dec 14 01:14:43 2024 00:15:10.421 write: IOPS=38.0k, BW=149MiB/s (156MB/s)(743MiB/5001msec); 0 zone resets 00:15:10.421 slat (usec): min=2, max=273, avg= 4.28, stdev= 2.43 00:15:10.421 clat (usec): min=386, max=5537, avg=1511.15, stdev=252.72 00:15:10.421 lat (usec): min=391, max=5541, avg=1515.43, stdev=253.27 00:15:10.421 clat percentiles (usec): 00:15:10.421 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 
1303], 00:15:10.421 | 30.00th=[ 1369], 40.00th=[ 1434], 50.00th=[ 1483], 60.00th=[ 1549], 00:15:10.421 | 70.00th=[ 1598], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:10.421 | 99.00th=[ 2245], 99.50th=[ 2474], 99.90th=[ 3130], 99.95th=[ 3490], 00:15:10.421 | 99.99th=[ 4424] 00:15:10.421 bw ( KiB/s): min=145768, max=155960, per=98.70%, avg=150209.78, stdev=3394.44, samples=9 00:15:10.421 iops : min=36442, max=38990, avg=37552.44, stdev=848.61, samples=9 00:15:10.421 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.22% 00:15:10.421 lat (msec) : 2=96.19%, 4=3.57%, 10=0.02% 00:15:10.421 cpu : usr=46.16%, sys=49.08%, ctx=9, majf=0, minf=1064 00:15:10.421 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:10.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.421 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:10.421 issued rwts: total=0,190268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:10.421 00:15:10.421 Run status group 0 (all jobs): 00:15:10.421 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=743MiB (779MB), run=5001-5001msec 00:15:10.421 ----------------------------------------------------- 00:15:10.421 Suppressions used: 00:15:10.421 count bytes template 00:15:10.421 1 11 /usr/src/fio/parse.c 00:15:10.421 1 8 libtcmalloc_minimal.so 00:15:10.421 1 904 libcrypto.so 00:15:10.421 ----------------------------------------------------- 00:15:10.421 00:15:10.421 ************************************ 00:15:10.421 00:15:10.421 real 0m11.993s 00:15:10.421 user 0m6.299s 00:15:10.421 sys 0m4.986s 00:15:10.421 01:14:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.421 01:14:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 END TEST xnvme_fio_plugin 00:15:10.421 ************************************ 00:15:10.421 
Process with pid 84253 is not found 00:15:10.421 01:14:43 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84253 00:15:10.421 01:14:43 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84253 ']' 00:15:10.421 01:14:43 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84253 00:15:10.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84253) - No such process 00:15:10.421 01:14:43 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84253 is not found' 00:15:10.421 ************************************ 00:15:10.421 END TEST nvme_xnvme 00:15:10.421 ************************************ 00:15:10.421 00:15:10.421 real 2m56.667s 00:15:10.421 user 1m27.769s 00:15:10.421 sys 1m14.767s 00:15:10.421 01:14:43 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.421 01:14:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 01:14:43 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:10.421 01:14:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:10.421 01:14:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.421 01:14:43 -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 ************************************ 00:15:10.421 START TEST blockdev_xnvme 00:15:10.421 ************************************ 00:15:10.421 01:14:43 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:10.682 * Looking for test storage... 
00:15:10.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.682 01:14:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.682 --rc genhtml_branch_coverage=1 00:15:10.682 --rc genhtml_function_coverage=1 00:15:10.682 --rc genhtml_legend=1 00:15:10.682 --rc geninfo_all_blocks=1 00:15:10.682 --rc geninfo_unexecuted_blocks=1 00:15:10.682 00:15:10.682 ' 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.682 --rc genhtml_branch_coverage=1 00:15:10.682 --rc genhtml_function_coverage=1 00:15:10.682 --rc genhtml_legend=1 00:15:10.682 --rc geninfo_all_blocks=1 00:15:10.682 --rc geninfo_unexecuted_blocks=1 00:15:10.682 
00:15:10.682 ' 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.682 --rc genhtml_branch_coverage=1 00:15:10.682 --rc genhtml_function_coverage=1 00:15:10.682 --rc genhtml_legend=1 00:15:10.682 --rc geninfo_all_blocks=1 00:15:10.682 --rc geninfo_unexecuted_blocks=1 00:15:10.682 00:15:10.682 ' 00:15:10.682 01:14:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.682 --rc genhtml_branch_coverage=1 00:15:10.682 --rc genhtml_function_coverage=1 00:15:10.682 --rc genhtml_legend=1 00:15:10.682 --rc geninfo_all_blocks=1 00:15:10.682 --rc geninfo_unexecuted_blocks=1 00:15:10.682 00:15:10.682 ' 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 
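The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: both version strings are split on `.`, `-`, and `:` and compared field by field. A standalone sketch of the same less-than logic (names here are illustrative, not the repo's own):

```shell
# version_lt A B: succeed when version A sorts strictly before version B.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so 1.15 vs 1.15.0 are equal.
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace: the first fields (1 vs 2) already decide the comparison, so `lt 1.15 2` returns 0 and the lcov coverage options get exported.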
00:15:10.682 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84821 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84821 00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 84821 ']' 00:15:10.683 01:14:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.683 01:14:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.683 [2024-12-14 01:14:44.218297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:10.683 [2024-12-14 01:14:44.218712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84821 ] 00:15:10.945 [2024-12-14 01:14:44.358031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.945 [2024-12-14 01:14:44.387416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.577 01:14:45 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.577 01:14:45 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:11.577 01:14:45 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:11.577 01:14:45 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:11.577 01:14:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:11.577 01:14:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:11.577 01:14:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:12.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.724 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:12.724 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:12.724 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:12.724 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:12.724 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- 
# zoned_devs=() 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:12.724 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n3/queue/zoned ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:12.725 01:14:46 blockdev_xnvme -- 
common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:12.725 
01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:12.725 nvme0n1 00:15:12.725 nvme0n2 00:15:12.725 nvme0n3 00:15:12.725 nvme1n1 00:15:12.725 nvme2n1 00:15:12.725 nvme3n1 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- 
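The six `bdev_xnvme_create ...` lines printed above are assembled by setup_xnvme_conf: every `/dev/nvme*n*` block device that is not zoned gets one create command with the chosen io_mechanism (io_uring here). A sketch of that assembly, assuming the sysfs layout the trace itself checks:

```shell
# Build one bdev_xnvme_create RPC per non-zoned NVMe namespace.
io_mechanism=io_uring
nvmes=()

is_zoned() {
    # A device is zoned when /sys/block/<dev>/queue/zoned reads something
    # other than "none" (e.g. host-managed or host-aware).
    local dev=$1 mode=none
    [ -e "/sys/block/$dev/queue/zoned" ] && mode=$(< "/sys/block/$dev/queue/zoned")
    [ "$mode" != "none" ]
}

for nvme in /dev/nvme*n*; do
    [ -b "$nvme" ] || continue          # skip if the glob matched nothing
    is_zoned "${nvme##*/}" && continue  # zoned namespaces need the zone API
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done

printf '%s\n' "${nvmes[@]}"
```

The resulting strings are then piped through `rpc_cmd`, which is why the log shows the commands and the created bdev names (nvme0n1 ... nvme3n1) back to back.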
# rpc_cmd save_subsystem_config -n accel 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 01:14:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:12.725 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:12.726 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9f3eaf5b-c02a-4ea7-90e1-ba93a644b96e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"9f3eaf5b-c02a-4ea7-90e1-ba93a644b96e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4c7adcbb-e6c9-4e00-a126-4caf38def00a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c7adcbb-e6c9-4e00-a126-4caf38def00a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ed24434f-8681-4dbf-875d-45faa25d076d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed24434f-8681-4dbf-875d-45faa25d076d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8110db45-694c-468d-bc1e-3a4acaed3a0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8110db45-694c-468d-bc1e-3a4acaed3a0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0a47357b-8bbf-4fa9-a92a-14474e8eb696"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0a47357b-8bbf-4fa9-a92a-14474e8eb696",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f8e5677f-fbaa-4772-aae8-b2ab3d0dc1cb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f8e5677f-fbaa-4772-aae8-b2ab3d0dc1cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:12.987 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:12.987 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:12.987 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:12.987 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 84821 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84821 ']' 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 84821 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84821 00:15:12.987 killing process with pid 84821 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84821' 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 84821 00:15:12.987 01:14:46 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 84821 00:15:13.561 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:13.561 01:14:46 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:13.561 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:13.561 01:14:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.561 01:14:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.561 ************************************ 00:15:13.561 START TEST bdev_hello_world 00:15:13.561 ************************************ 00:15:13.561 01:14:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:13.561 [2024-12-14 01:14:46.977989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:13.561 [2024-12-14 01:14:46.978142] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85094 ] 00:15:13.561 [2024-12-14 01:14:47.119860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.561 [2024-12-14 01:14:47.160294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.822 [2024-12-14 01:14:47.424058] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:13.822 [2024-12-14 01:14:47.424126] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:13.822 [2024-12-14 01:14:47.424153] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:13.822 [2024-12-14 01:14:47.426579] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:13.822 [2024-12-14 01:14:47.427085] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:13.822 [2024-12-14 01:14:47.427123] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:13.822 [2024-12-14 01:14:47.427725] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:13.822 00:15:13.822 [2024-12-14 01:14:47.427764] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:14.084 00:15:14.084 real 0m0.779s 00:15:14.084 user 0m0.391s 00:15:14.084 sys 0m0.241s 00:15:14.084 01:14:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.084 ************************************ 00:15:14.084 END TEST bdev_hello_world 00:15:14.084 ************************************ 00:15:14.084 01:14:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 01:14:47 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:14.345 01:14:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:14.345 01:14:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.345 01:14:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 ************************************ 00:15:14.345 START TEST bdev_bounds 00:15:14.345 ************************************ 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85115 00:15:14.345 Process bdevio pid: 85115 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85115' 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85115 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85115 ']' 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.345 01:14:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 [2024-12-14 01:14:47.829157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:14.345 [2024-12-14 01:14:47.829310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85115 ] 00:15:14.607 [2024-12-14 01:14:47.973256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.607 [2024-12-14 01:14:48.016826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.607 [2024-12-14 01:14:48.017480] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.607 [2024-12-14 01:14:48.017576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.180 01:14:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.180 01:14:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:15.180 01:14:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:15.442 I/O targets: 00:15:15.442 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:15.442 nvme0n2: 1048576 
blocks of 4096 bytes (4096 MiB) 00:15:15.442 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:15.442 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:15.442 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:15.442 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:15.442 00:15:15.442 00:15:15.442 CUnit - A unit testing framework for C - Version 2.1-3 00:15:15.442 http://cunit.sourceforge.net/ 00:15:15.442 00:15:15.442 00:15:15.442 Suite: bdevio tests on: nvme3n1 00:15:15.442 Test: blockdev write read block ...passed 00:15:15.442 Test: blockdev write zeroes read block ...passed 00:15:15.442 Test: blockdev write zeroes read no split ...passed 00:15:15.442 Test: blockdev write zeroes read split ...passed 00:15:15.442 Test: blockdev write zeroes read split partial ...passed 00:15:15.442 Test: blockdev reset ...passed 00:15:15.442 Test: blockdev write read 8 blocks ...passed 00:15:15.442 Test: blockdev write read size > 128k ...passed 00:15:15.442 Test: blockdev write read invalid size ...passed 00:15:15.442 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.442 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.442 Test: blockdev write read max offset ...passed 00:15:15.442 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.442 Test: blockdev writev readv 8 blocks ...passed 00:15:15.442 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.442 Test: blockdev writev readv block ...passed 00:15:15.442 Test: blockdev writev readv size > 128k ...passed 00:15:15.442 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.442 Test: blockdev comparev and writev ...passed 00:15:15.442 Test: blockdev nvme passthru rw ...passed 00:15:15.442 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.442 Test: blockdev nvme admin passthru ...passed 00:15:15.442 Test: blockdev copy ...passed 00:15:15.442 Suite: bdevio tests on: nvme2n1 00:15:15.442 
Test: blockdev write read block ...passed 00:15:15.442 Test: blockdev write zeroes read block ...passed 00:15:15.442 Test: blockdev write zeroes read no split ...passed 00:15:15.442 Test: blockdev write zeroes read split ...passed 00:15:15.442 Test: blockdev write zeroes read split partial ...passed 00:15:15.442 Test: blockdev reset ...passed 00:15:15.442 Test: blockdev write read 8 blocks ...passed 00:15:15.442 Test: blockdev write read size > 128k ...passed 00:15:15.442 Test: blockdev write read invalid size ...passed 00:15:15.442 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.442 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.442 Test: blockdev write read max offset ...passed 00:15:15.442 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.442 Test: blockdev writev readv 8 blocks ...passed 00:15:15.442 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.442 Test: blockdev writev readv block ...passed 00:15:15.442 Test: blockdev writev readv size > 128k ...passed 00:15:15.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.443 Test: blockdev comparev and writev ...passed 00:15:15.443 Test: blockdev nvme passthru rw ...passed 00:15:15.443 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.443 Test: blockdev nvme admin passthru ...passed 00:15:15.443 Test: blockdev copy ...passed 00:15:15.443 Suite: bdevio tests on: nvme1n1 00:15:15.443 Test: blockdev write read block ...passed 00:15:15.443 Test: blockdev write zeroes read block ...passed 00:15:15.443 Test: blockdev write zeroes read no split ...passed 00:15:15.443 Test: blockdev write zeroes read split ...passed 00:15:15.443 Test: blockdev write zeroes read split partial ...passed 00:15:15.443 Test: blockdev reset ...passed 00:15:15.443 Test: blockdev write read 8 blocks ...passed 00:15:15.443 Test: blockdev write read size > 128k ...passed 00:15:15.443 Test: blockdev write 
read invalid size ...passed 00:15:15.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.443 Test: blockdev write read max offset ...passed 00:15:15.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.443 Test: blockdev writev readv 8 blocks ...passed 00:15:15.443 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.443 Test: blockdev writev readv block ...passed 00:15:15.443 Test: blockdev writev readv size > 128k ...passed 00:15:15.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.443 Test: blockdev comparev and writev ...passed 00:15:15.443 Test: blockdev nvme passthru rw ...passed 00:15:15.443 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.443 Test: blockdev nvme admin passthru ...passed 00:15:15.443 Test: blockdev copy ...passed 00:15:15.443 Suite: bdevio tests on: nvme0n3 00:15:15.443 Test: blockdev write read block ...passed 00:15:15.443 Test: blockdev write zeroes read block ...passed 00:15:15.443 Test: blockdev write zeroes read no split ...passed 00:15:15.443 Test: blockdev write zeroes read split ...passed 00:15:15.443 Test: blockdev write zeroes read split partial ...passed 00:15:15.443 Test: blockdev reset ...passed 00:15:15.443 Test: blockdev write read 8 blocks ...passed 00:15:15.443 Test: blockdev write read size > 128k ...passed 00:15:15.443 Test: blockdev write read invalid size ...passed 00:15:15.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.443 Test: blockdev write read max offset ...passed 00:15:15.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.443 Test: blockdev writev readv 8 blocks ...passed 00:15:15.443 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.443 Test: blockdev 
writev readv block ...passed 00:15:15.443 Test: blockdev writev readv size > 128k ...passed 00:15:15.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.443 Test: blockdev comparev and writev ...passed 00:15:15.443 Test: blockdev nvme passthru rw ...passed 00:15:15.443 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.443 Test: blockdev nvme admin passthru ...passed 00:15:15.443 Test: blockdev copy ...passed 00:15:15.443 Suite: bdevio tests on: nvme0n2 00:15:15.443 Test: blockdev write read block ...passed 00:15:15.443 Test: blockdev write zeroes read block ...passed 00:15:15.443 Test: blockdev write zeroes read no split ...passed 00:15:15.443 Test: blockdev write zeroes read split ...passed 00:15:15.443 Test: blockdev write zeroes read split partial ...passed 00:15:15.443 Test: blockdev reset ...passed 00:15:15.443 Test: blockdev write read 8 blocks ...passed 00:15:15.443 Test: blockdev write read size > 128k ...passed 00:15:15.443 Test: blockdev write read invalid size ...passed 00:15:15.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.443 Test: blockdev write read max offset ...passed 00:15:15.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.443 Test: blockdev writev readv 8 blocks ...passed 00:15:15.443 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.443 Test: blockdev writev readv block ...passed 00:15:15.443 Test: blockdev writev readv size > 128k ...passed 00:15:15.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.443 Test: blockdev comparev and writev ...passed 00:15:15.443 Test: blockdev nvme passthru rw ...passed 00:15:15.443 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.443 Test: blockdev nvme admin passthru ...passed 00:15:15.443 Test: blockdev copy ...passed 00:15:15.443 Suite: bdevio tests on: nvme0n1 
00:15:15.443 Test: blockdev write read block ...passed 00:15:15.443 Test: blockdev write zeroes read block ...passed 00:15:15.443 Test: blockdev write zeroes read no split ...passed 00:15:15.705 Test: blockdev write zeroes read split ...passed 00:15:15.705 Test: blockdev write zeroes read split partial ...passed 00:15:15.705 Test: blockdev reset ...passed 00:15:15.705 Test: blockdev write read 8 blocks ...passed 00:15:15.705 Test: blockdev write read size > 128k ...passed 00:15:15.705 Test: blockdev write read invalid size ...passed 00:15:15.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.705 Test: blockdev write read max offset ...passed 00:15:15.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.705 Test: blockdev writev readv 8 blocks ...passed 00:15:15.705 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.705 Test: blockdev writev readv block ...passed 00:15:15.705 Test: blockdev writev readv size > 128k ...passed 00:15:15.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.705 Test: blockdev comparev and writev ...passed 00:15:15.705 Test: blockdev nvme passthru rw ...passed 00:15:15.705 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.705 Test: blockdev nvme admin passthru ...passed 00:15:15.705 Test: blockdev copy ...passed 00:15:15.705 00:15:15.705 Run Summary: Type Total Ran Passed Failed Inactive 00:15:15.705 suites 6 6 n/a 0 0 00:15:15.705 tests 138 138 138 0 0 00:15:15.705 asserts 780 780 780 0 n/a 00:15:15.705 00:15:15.705 Elapsed time = 0.594 seconds 00:15:15.705 0 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85115 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85115 ']' 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill 
-0 85115 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85115 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.705 killing process with pid 85115 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85115' 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85115 00:15:15.705 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85115 00:15:15.967 01:14:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:15.967 00:15:15.967 real 0m1.633s 00:15:15.967 user 0m3.962s 00:15:15.967 sys 0m0.393s 00:15:15.967 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.967 01:14:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:15.967 ************************************ 00:15:15.967 END TEST bdev_bounds 00:15:15.967 ************************************ 00:15:15.967 01:14:49 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:15.967 01:14:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:15.967 01:14:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.967 01:14:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.967 ************************************ 00:15:15.967 START TEST bdev_nbd 
00:15:15.967 ************************************ 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:15.967 01:14:49 
blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85170 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85170 /var/tmp/spdk-nbd.sock 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85170 ']' 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:15.967 01:14:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:15.967 [2024-12-14 01:14:49.546658] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:15.967 [2024-12-14 01:14:49.546807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.228 [2024-12-14 01:14:49.695859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.228 [2024-12-14 01:14:49.736061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:16.801 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.062 1+0 records in 00:15:17.062 1+0 records out 00:15:17.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996435 s, 4.1 MB/s 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- 
# rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:17.062 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.323 1+0 records in 00:15:17.323 1+0 records out 00:15:17.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001177 s, 3.5 MB/s 00:15:17.323 01:14:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:17.323 01:14:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.584 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.584 01:14:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.584 1+0 records in 00:15:17.584 1+0 records out 00:15:17.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104315 s, 3.9 MB/s 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:17.585 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 
00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.846 1+0 records in 00:15:17.846 1+0 records out 00:15:17.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101871 s, 4.0 MB/s 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:17.846 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.107 
01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.107 1+0 records in 00:15:18.107 1+0 records out 00:15:18.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139449 s, 2.9 MB/s 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:18.107 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:18.368 1+0 records in
00:15:18.368 1+0 records out
00:15:18.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968228 s, 4.2 MB/s
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:18.368 01:14:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd0",
00:15:18.630 "bdev_name": "nvme0n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd1",
00:15:18.630 "bdev_name": "nvme0n2"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd2",
00:15:18.630 "bdev_name": "nvme0n3"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd3",
00:15:18.630 "bdev_name": "nvme1n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd4",
00:15:18.630 "bdev_name": "nvme2n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd5",
00:15:18.630 "bdev_name": "nvme3n1"
00:15:18.630 }
00:15:18.630 ]'
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd0",
00:15:18.630 "bdev_name": "nvme0n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd1",
00:15:18.630 "bdev_name": "nvme0n2"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd2",
00:15:18.630 "bdev_name": "nvme0n3"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd3",
00:15:18.630 "bdev_name": "nvme1n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd4",
00:15:18.630 "bdev_name": "nvme2n1"
00:15:18.630 },
00:15:18.630 {
00:15:18.630 "nbd_device": "/dev/nbd5",
00:15:18.630 "bdev_name": "nvme3n1"
00:15:18.630 }
00:15:18.630 ]'
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:18.630 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:18.891 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.152 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.414 01:14:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.676 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:19.938 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:20.201 01:14:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:15:20.460 /dev/nbd0
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:20.460 1+0 records in
00:15:20.460 1+0 records out
00:15:20.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692149 s, 5.9 MB/s
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:20.460 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1
00:15:20.719 /dev/nbd1
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:20.719 1+0 records in
00:15:20.719 1+0 records out
00:15:20.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401171 s, 10.2 MB/s
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:20.719 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10
00:15:20.977 /dev/nbd10
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:20.977 1+0 records in
00:15:20.977 1+0 records out
00:15:20.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000972569 s, 4.2 MB/s
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:20.977 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11
00:15:21.235 /dev/nbd11
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:21.235 1+0 records in
00:15:21.235 1+0 records out
00:15:21.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845139 s, 4.8 MB/s
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:21.235 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:15:21.494 /dev/nbd12
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:21.494 1+0 records in
00:15:21.494 1+0 records out
00:15:21.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973278 s, 4.2 MB/s
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:21.494 01:14:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:15:21.752 /dev/nbd13
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:21.752 1+0 records in
00:15:21.752 1+0 records out
00:15:21.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107681 s, 3.8 MB/s
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:21.752 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd0",
00:15:22.010 "bdev_name": "nvme0n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd1",
00:15:22.010 "bdev_name": "nvme0n2"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd10",
00:15:22.010 "bdev_name": "nvme0n3"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd11",
00:15:22.010 "bdev_name": "nvme1n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd12",
00:15:22.010 "bdev_name": "nvme2n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd13",
00:15:22.010 "bdev_name": "nvme3n1"
00:15:22.010 }
00:15:22.010 ]'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd0",
00:15:22.010 "bdev_name": "nvme0n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd1",
00:15:22.010 "bdev_name": "nvme0n2"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd10",
00:15:22.010 "bdev_name": "nvme0n3"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd11",
00:15:22.010 "bdev_name": "nvme1n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd12",
00:15:22.010 "bdev_name": "nvme2n1"
00:15:22.010 },
00:15:22.010 {
00:15:22.010 "nbd_device": "/dev/nbd13",
00:15:22.010 "bdev_name": "nvme3n1"
00:15:22.010 }
00:15:22.010 ]'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:15:22.010 /dev/nbd1
00:15:22.010 /dev/nbd10
00:15:22.010 /dev/nbd11
00:15:22.010 /dev/nbd12
00:15:22.010 /dev/nbd13'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:15:22.010 /dev/nbd1
00:15:22.010 /dev/nbd10
00:15:22.010 /dev/nbd11
00:15:22.010 /dev/nbd12
00:15:22.010 /dev/nbd13'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:15:22.010 256+0 records in
00:15:22.010 256+0 records out
00:15:22.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741812 s, 141 MB/s
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:15:22.010 256+0 records in
00:15:22.010 256+0 records out
00:15:22.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170243 s, 6.2 MB/s
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:22.010 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:15:22.271 256+0 records in
00:15:22.271 256+0 records out
00:15:22.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218724 s, 4.8 MB/s
00:15:22.271 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:22.271 01:14:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:15:22.532 256+0 records in
00:15:22.532 256+0 records out
00:15:22.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238414 s, 4.4 MB/s
00:15:22.532 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:22.532 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:15:22.794 256+0 records in
00:15:22.794 256+0 records out
00:15:22.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203246 s, 5.2 MB/s
00:15:22.794 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:22.794 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:15:23.056 256+0 records in
00:15:23.056 256+0 records out
00:15:23.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.302952 s, 3.5 MB/s
00:15:23.056 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:23.056 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:15:23.317 256+0 records in
00:15:23.318 256+0 records out
00:15:23.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241552 s, 4.3 MB/s
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:23.318 01:14:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:23.579 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:23.840 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:24.101 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:24.361 01:14:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12
/proc/partitions 00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.622 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:24.884 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:25.143 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:25.144 01:14:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:25.144 malloc_lvol_verify 00:15:25.144 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:25.402 d0806327-6a72-4980-a0bc-7438b43c592c 00:15:25.402 01:14:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:25.661 e7899c8c-54f1-4e8e-9a81-283b6aaf4cdf 00:15:25.661 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:25.919 /dev/nbd0 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # 
wait_for_nbd_set_capacity /dev/nbd0 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:25.919 mke2fs 1.47.0 (5-Feb-2023) 00:15:25.919 Discarding device blocks: 0/4096 done 00:15:25.919 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:25.919 00:15:25.919 Allocating group tables: 0/1 done 00:15:25.919 Writing inode tables: 0/1 done 00:15:25.919 Creating journal (1024 blocks): done 00:15:25.919 Writing superblocks and filesystem accounting information: 0/1 done 00:15:25.919 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.919 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:26.178 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85170 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85170 ']' 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85170 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85170 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.179 killing process with pid 85170 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85170' 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85170 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85170 00:15:26.179 ************************************ 00:15:26.179 END TEST bdev_nbd 00:15:26.179 ************************************ 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:26.179 00:15:26.179 real 0m10.315s 00:15:26.179 user 0m14.041s 00:15:26.179 sys 0m3.778s 00:15:26.179 01:14:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.179 01:14:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 01:14:59 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:26.440 01:14:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:26.440 01:14:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:26.440 01:14:59 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:26.440 01:14:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.440 01:14:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.440 01:14:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 ************************************ 00:15:26.440 START TEST bdev_fio 00:15:26.440 ************************************ 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:26.440 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 
-- # local workload=verify 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.440 01:14:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 ************************************ 00:15:26.440 START TEST bdev_fio_rw_verify 00:15:26.441 ************************************ 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:26.441 01:14:59 
blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:26.441 01:14:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.700 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 job_nvme1n1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.700 fio-3.35 00:15:26.700 Starting 6 threads 00:15:38.937 00:15:38.937 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85577: Sat Dec 14 01:15:10 2024 00:15:38.937 read: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(567MiB/10002msec) 00:15:38.937 slat (usec): min=2, max=1627, avg= 6.51, stdev=12.96 00:15:38.937 clat (usec): min=96, max=7367, avg=1349.07, stdev=723.64 00:15:38.937 lat (usec): min=99, max=7394, avg=1355.58, stdev=724.16 00:15:38.937 clat percentiles (usec): 00:15:38.937 | 50.000th=[ 1254], 99.000th=[ 3654], 99.900th=[ 5080], 99.990th=[ 6980], 00:15:38.937 | 99.999th=[ 7373] 00:15:38.937 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(576MiB/10002msec); 0 zone resets 00:15:38.937 slat (usec): min=3, max=4124, avg=39.49, stdev=134.70 00:15:38.937 clat (usec): min=96, max=8393, avg=1608.59, stdev=789.34 00:15:38.937 lat (usec): min=111, max=8431, avg=1648.07, stdev=800.93 00:15:38.937 clat percentiles (usec): 00:15:38.937 | 50.000th=[ 1483], 99.000th=[ 4080], 99.900th=[ 5342], 99.990th=[ 6652], 00:15:38.937 | 99.999th=[ 7767] 00:15:38.937 bw ( KiB/s): min=48669, max=80541, per=100.00%, avg=59338.84, stdev=1638.56, samples=114 00:15:38.937 iops : min=12165, max=20134, avg=14834.11, stdev=409.63, samples=114 00:15:38.937 lat (usec) : 100=0.01%, 250=1.47%, 500=5.13%, 750=8.92%, 1000=12.15% 00:15:38.937 lat (msec) : 2=51.81%, 4=19.67%, 10=0.84% 00:15:38.937 cpu : usr=44.25%, sys=31.30%, ctx=5598, majf=0, minf=16481 00:15:38.937 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.937 
complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.937 issued rwts: total=145113,147458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:38.937 00:15:38.937 Run status group 0 (all jobs): 00:15:38.937 READ: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=567MiB (594MB), run=10002-10002msec 00:15:38.937 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10002-10002msec 00:15:38.937 ----------------------------------------------------- 00:15:38.937 Suppressions used: 00:15:38.937 count bytes template 00:15:38.937 6 48 /usr/src/fio/parse.c 00:15:38.937 2246 215616 /usr/src/fio/iolog.c 00:15:38.937 1 8 libtcmalloc_minimal.so 00:15:38.937 1 904 libcrypto.so 00:15:38.937 ----------------------------------------------------- 00:15:38.937 00:15:38.937 00:15:38.937 real 0m11.156s 00:15:38.937 user 0m27.276s 00:15:38.937 sys 0m19.100s 00:15:38.937 ************************************ 00:15:38.937 END TEST bdev_fio_rw_verify 00:15:38.937 ************************************ 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.937 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:38.938 
01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9f3eaf5b-c02a-4ea7-90e1-ba93a644b96e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f3eaf5b-c02a-4ea7-90e1-ba93a644b96e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4c7adcbb-e6c9-4e00-a126-4caf38def00a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c7adcbb-e6c9-4e00-a126-4caf38def00a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ed24434f-8681-4dbf-875d-45faa25d076d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed24434f-8681-4dbf-875d-45faa25d076d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' 
' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8110db45-694c-468d-bc1e-3a4acaed3a0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8110db45-694c-468d-bc1e-3a4acaed3a0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0a47357b-8bbf-4fa9-a92a-14474e8eb696"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0a47357b-8bbf-4fa9-a92a-14474e8eb696",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' 
"f8e5677f-fbaa-4772-aae8-b2ab3d0dc1cb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f8e5677f-fbaa-4772-aae8-b2ab3d0dc1cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.938 /home/vagrant/spdk_repo/spdk 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:38.938 00:15:38.938 real 0m11.331s 00:15:38.938 user 0m27.351s 00:15:38.938 sys 0m19.180s 00:15:38.938 ************************************ 00:15:38.938 END TEST bdev_fio 00:15:38.938 ************************************ 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.938 01:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:38.938 01:15:11 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:38.938 01:15:11 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:38.938 01:15:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:38.938 01:15:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.938 01:15:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.938 ************************************ 00:15:38.938 START TEST bdev_verify 00:15:38.938 ************************************ 00:15:38.938 01:15:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:38.938 [2024-12-14 01:15:11.311594] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:38.938 [2024-12-14 01:15:11.311756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85742 ] 00:15:38.938 [2024-12-14 01:15:11.460405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.938 [2024-12-14 01:15:11.491164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.938 [2024-12-14 01:15:11.491212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.938 Running I/O for 5 seconds... 
00:15:40.458 26304.00 IOPS, 102.75 MiB/s [2024-12-14T01:15:15.075Z] 25584.00 IOPS, 99.94 MiB/s [2024-12-14T01:15:16.021Z] 24789.33 IOPS, 96.83 MiB/s [2024-12-14T01:15:16.963Z] 24400.00 IOPS, 95.31 MiB/s [2024-12-14T01:15:16.963Z] 24070.40 IOPS, 94.02 MiB/s
00:15:43.351 Latency(us)
00:15:43.351 [2024-12-14T01:15:16.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:43.351 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0x80000
00:15:43.351 nvme0n1 : 5.04 1778.22 6.95 0.00 0.00 71854.98 11897.30 69770.63
00:15:43.351 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x80000 length 0x80000
00:15:43.351 nvme0n1 : 5.03 1986.44 7.76 0.00 0.00 64328.69 6351.95 61301.37
00:15:43.351 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0x80000
00:15:43.351 nvme0n2 : 5.05 1798.17 7.02 0.00 0.00 70926.81 6906.49 70980.53
00:15:43.351 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x80000 length 0x80000
00:15:43.351 nvme0n2 : 5.05 1975.32 7.72 0.00 0.00 64584.00 10536.17 63317.86
00:15:43.351 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0x80000
00:15:43.351 nvme0n3 : 5.04 1779.51 6.95 0.00 0.00 71550.80 9830.40 70980.53
00:15:43.351 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x80000 length 0x80000
00:15:43.351 nvme0n3 : 5.06 1973.60 7.71 0.00 0.00 64540.00 13107.20 60898.07
00:15:43.351 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0xa0000
00:15:43.351 nvme1n1 : 5.05 1775.11 6.93 0.00 0.00 71605.89 10384.94 71787.13
00:15:43.351 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0xa0000 length 0xa0000
00:15:43.351 nvme1n1 : 5.04 1982.10 7.74 0.00 0.00 64151.16 10687.41 69367.34
00:15:43.351 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0xbd0bd
00:15:43.351 nvme2n1 : 5.06 2385.02 9.32 0.00 0.00 53145.99 4814.38 62107.96
00:15:43.351 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:15:43.351 nvme2n1 : 5.08 2546.66 9.95 0.00 0.00 49797.99 4864.79 54041.99
00:15:43.351 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x0 length 0x20000
00:15:43.351 nvme3n1 : 5.08 1841.10 7.19 0.00 0.00 68687.73 4537.11 64527.75
00:15:43.351 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:43.351 Verification LBA range: start 0x20000 length 0x20000
00:15:43.351 nvme3n1 : 5.07 1995.24 7.79 0.00 0.00 63534.51 5368.91 65334.35
00:15:43.351 [2024-12-14T01:15:16.963Z] ===================================================================================================================
00:15:43.351 [2024-12-14T01:15:16.963Z] Total : 23816.48 93.03 0.00 0.00 64080.57 4537.11 71787.13
00:15:43.612
00:15:43.612 real 0m5.859s
00:15:43.612 user 0m9.373s
00:15:43.612 sys 0m1.458s
00:15:43.612 01:15:17 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:43.612 ************************************
00:15:43.612 END TEST bdev_verify
00:15:43.612 ************************************
00:15:43.612 01:15:17 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:15:43.612 01:15:17 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:43.612 01:15:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:43.612 01:15:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.612 01:15:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 ************************************ 00:15:43.612 START TEST bdev_verify_big_io 00:15:43.612 ************************************ 00:15:43.612 01:15:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:43.873 [2024-12-14 01:15:17.238567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:43.873 [2024-12-14 01:15:17.238726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85833 ] 00:15:43.873 [2024-12-14 01:15:17.382475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.873 [2024-12-14 01:15:17.412325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.873 [2024-12-14 01:15:17.412373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.134 Running I/O for 5 seconds... 
00:15:49.986 824.00 IOPS, 51.50 MiB/s [2024-12-14T01:15:23.859Z] 2724.50 IOPS, 170.28 MiB/s [2024-12-14T01:15:23.859Z] 2875.33 IOPS, 179.71 MiB/s
00:15:50.247 Latency(us)
00:15:50.247 [2024-12-14T01:15:23.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:50.247 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0x8000
00:15:50.247 nvme0n1 : 5.96 107.38 6.71 0.00 0.00 1128919.36 125022.52 1213121.77
00:15:50.247 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x8000 length 0x8000
00:15:50.247 nvme0n1 : 5.89 108.60 6.79 0.00 0.00 1128133.55 153253.42 1064707.94
00:15:50.247 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0x8000
00:15:50.247 nvme0n2 : 5.97 144.62 9.04 0.00 0.00 842678.23 6326.74 1329271.73
00:15:50.247 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x8000 length 0x8000
00:15:50.247 nvme0n2 : 5.86 141.90 8.87 0.00 0.00 857902.60 66140.95 858219.13
00:15:50.247 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0x8000
00:15:50.247 nvme0n3 : 5.98 125.81 7.86 0.00 0.00 940064.38 14922.04 1729343.80
00:15:50.247 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x8000 length 0x8000
00:15:50.247 nvme0n3 : 5.87 132.29 8.27 0.00 0.00 875609.16 10687.41 845313.58
00:15:50.247 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0xa000
00:15:50.247 nvme1n1 : 5.98 114.99 7.19 0.00 0.00 995713.71 18148.43 1832588.21
00:15:50.247 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0xa000 length 0xa000
00:15:50.247 nvme1n1 : 5.89 152.15 9.51 0.00 0.00 754534.51 17745.13 1109877.37
00:15:50.247 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0xbd0b
00:15:50.247 nvme2n1 : 5.97 142.07 8.88 0.00 0.00 782581.52 10737.82 1497043.89
00:15:50.247 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0xbd0b length 0xbd0b
00:15:50.247 nvme2n1 : 5.88 163.37 10.21 0.00 0.00 674810.38 6604.01 761427.50
00:15:50.247 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x0 length 0x2000
00:15:50.247 nvme3n1 : 5.98 135.44 8.47 0.00 0.00 794769.85 6956.90 1471232.79
00:15:50.247 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:50.247 Verification LBA range: start 0x2000 length 0x2000
00:15:50.247 nvme3n1 : 5.89 133.08 8.32 0.00 0.00 806212.01 8973.39 774333.05
00:15:50.247 [2024-12-14T01:15:23.859Z] ===================================================================================================================
00:15:50.247 [2024-12-14T01:15:23.859Z] Total : 1601.68 100.11 0.00 0.00 866297.03 6326.74 1832588.21
00:15:50.508
00:15:50.508 real 0m6.752s
00:15:50.508 user 0m12.431s
00:15:50.508 sys 0m0.430s
00:15:50.508 01:15:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:50.508 ************************************
00:15:50.508 END TEST bdev_verify_big_io
00:15:50.508 ************************************
00:15:50.508 01:15:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:15:50.508 01:15:23 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:50.508 01:15:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:50.508 01:15:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.508 01:15:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.508 ************************************ 00:15:50.508 START TEST bdev_write_zeroes 00:15:50.508 ************************************ 00:15:50.508 01:15:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:50.508 [2024-12-14 01:15:24.064909] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:50.508 [2024-12-14 01:15:24.065045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85932 ] 00:15:50.769 [2024-12-14 01:15:24.211126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.769 [2024-12-14 01:15:24.240462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.029 Running I/O for 1 seconds... 
00:15:51.969 77632.00 IOPS, 303.25 MiB/s
00:15:51.969 Latency(us)
00:15:51.969 [2024-12-14T01:15:25.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:51.969 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme0n1 : 1.02 12779.14 49.92 0.00 0.00 10006.22 7108.14 20870.70
00:15:51.969 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme0n2 : 1.02 12764.42 49.86 0.00 0.00 10009.27 7208.96 22383.06
00:15:51.969 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme0n3 : 1.02 12749.74 49.80 0.00 0.00 10011.88 7208.96 23996.26
00:15:51.969 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme1n1 : 1.03 12735.54 49.75 0.00 0.00 10010.69 7208.96 23794.61
00:15:51.969 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme2n1 : 1.02 13486.11 52.68 0.00 0.00 9443.60 3806.13 20870.70
00:15:51.969 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.969 nvme3n1 : 1.03 12720.99 49.69 0.00 0.00 9935.82 3932.16 20669.05
00:15:51.969 [2024-12-14T01:15:25.581Z] ===================================================================================================================
00:15:51.969 [2024-12-14T01:15:25.581Z] Total : 77235.94 301.70 0.00 0.00 9898.82 3806.13 23996.26
00:15:52.229
00:15:52.229 real 0m1.761s
00:15:52.229 user 0m1.094s
00:15:52.229 sys 0m0.476s
00:15:52.229 01:15:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:52.229 01:15:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:15:52.229 ************************************
00:15:52.229 END TEST bdev_write_zeroes
00:15:52.229 ************************************
00:15:52.229 01:15:25 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.229 01:15:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:52.229 01:15:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.229 01:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 ************************************ 00:15:52.229 START TEST bdev_json_nonenclosed 00:15:52.229 ************************************ 00:15:52.229 01:15:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.487 [2024-12-14 01:15:25.883098] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:52.487 [2024-12-14 01:15:25.883234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85975 ] 00:15:52.487 [2024-12-14 01:15:26.031311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.487 [2024-12-14 01:15:26.059776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.487 [2024-12-14 01:15:26.059880] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:15:52.487 [2024-12-14 01:15:26.059900] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:52.488 [2024-12-14 01:15:26.059914] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:52.746 00:15:52.746 real 0m0.319s 00:15:52.746 user 0m0.122s 00:15:52.746 sys 0m0.093s 00:15:52.746 01:15:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.746 01:15:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 ************************************ 00:15:52.746 END TEST bdev_json_nonenclosed 00:15:52.746 ************************************ 00:15:52.746 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.746 01:15:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:52.746 01:15:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.746 01:15:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 ************************************ 00:15:52.746 START TEST bdev_json_nonarray 00:15:52.746 ************************************ 00:15:52.746 01:15:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.746 [2024-12-14 01:15:26.255056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:52.746 [2024-12-14 01:15:26.255169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85995 ] 00:15:53.004 [2024-12-14 01:15:26.399596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.004 [2024-12-14 01:15:26.418436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.004 [2024-12-14 01:15:26.418518] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:53.004 [2024-12-14 01:15:26.418533] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:53.004 [2024-12-14 01:15:26.418545] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:53.004 00:15:53.004 real 0m0.285s 00:15:53.004 user 0m0.114s 00:15:53.004 sys 0m0.068s 00:15:53.004 ************************************ 00:15:53.004 01:15:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.004 01:15:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:53.004 END TEST bdev_json_nonarray 00:15:53.004 ************************************ 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:53.004 01:15:26 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:53.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:00.217 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.611 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.872 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.872 00:16:01.872 real 0m51.358s 00:16:01.872 user 1m12.992s 00:16:01.872 sys 0m43.007s 00:16:01.872 01:15:35 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.872 01:15:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.872 ************************************ 00:16:01.872 END TEST blockdev_xnvme 00:16:01.872 ************************************ 00:16:01.872 01:15:35 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:01.872 01:15:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:01.872 01:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.872 01:15:35 -- common/autotest_common.sh@10 -- # set +x 00:16:01.872 ************************************ 00:16:01.872 START TEST ublk 00:16:01.872 ************************************ 00:16:01.872 01:15:35 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:01.872 * Looking for test storage... 
00:16:01.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:01.872 01:15:35 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:01.872 01:15:35 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:01.872 01:15:35 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.133 01:15:35 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.133 01:15:35 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.133 01:15:35 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.133 01:15:35 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.133 01:15:35 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.133 01:15:35 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:02.133 01:15:35 ublk -- scripts/common.sh@345 -- # : 1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.133 01:15:35 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.133 01:15:35 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@353 -- # local d=1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.133 01:15:35 ublk -- scripts/common.sh@355 -- # echo 1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.133 01:15:35 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@353 -- # local d=2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.133 01:15:35 ublk -- scripts/common.sh@355 -- # echo 2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.133 01:15:35 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.133 01:15:35 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.133 01:15:35 ublk -- scripts/common.sh@368 -- # return 0 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:02.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.133 --rc genhtml_branch_coverage=1 00:16:02.133 --rc genhtml_function_coverage=1 00:16:02.133 --rc genhtml_legend=1 00:16:02.133 --rc geninfo_all_blocks=1 00:16:02.133 --rc geninfo_unexecuted_blocks=1 00:16:02.133 00:16:02.133 ' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:02.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.133 --rc genhtml_branch_coverage=1 00:16:02.133 --rc genhtml_function_coverage=1 00:16:02.133 --rc genhtml_legend=1 00:16:02.133 --rc geninfo_all_blocks=1 00:16:02.133 --rc geninfo_unexecuted_blocks=1 00:16:02.133 00:16:02.133 ' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:02.133 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:02.133 --rc genhtml_branch_coverage=1 00:16:02.133 --rc genhtml_function_coverage=1 00:16:02.133 --rc genhtml_legend=1 00:16:02.133 --rc geninfo_all_blocks=1 00:16:02.133 --rc geninfo_unexecuted_blocks=1 00:16:02.133 00:16:02.133 ' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:02.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.133 --rc genhtml_branch_coverage=1 00:16:02.133 --rc genhtml_function_coverage=1 00:16:02.133 --rc genhtml_legend=1 00:16:02.133 --rc geninfo_all_blocks=1 00:16:02.133 --rc geninfo_unexecuted_blocks=1 00:16:02.133 00:16:02.133 ' 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:02.133 01:15:35 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:02.133 01:15:35 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:02.133 01:15:35 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:02.133 01:15:35 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:02.133 01:15:35 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:02.133 01:15:35 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:02.133 01:15:35 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:02.133 01:15:35 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@133 -- # 
modprobe ublk_drv 00:16:02.133 01:15:35 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.133 01:15:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.133 ************************************ 00:16:02.133 START TEST test_save_ublk_config 00:16:02.133 ************************************ 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86291 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86291 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86291 ']' 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:02.133 01:15:35 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:02.133 [2024-12-14 01:15:35.663574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:02.133 [2024-12-14 01:15:35.663752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86291 ] 00:16:02.394 [2024-12-14 01:15:35.803320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.394 [2024-12-14 01:15:35.832125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.967 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:02.967 [2024-12-14 01:15:36.527649] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:02.967 [2024-12-14 01:15:36.528604] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:02.967 malloc0 00:16:02.967 [2024-12-14 01:15:36.559761] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:02.967 [2024-12-14 01:15:36.559843] ublk.c:1965:ublk_start_disk: *INFO*: Enabling 
kernel access to bdev malloc0 via ublk 0 00:16:02.967 [2024-12-14 01:15:36.559852] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:02.967 [2024-12-14 01:15:36.559869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:02.967 [2024-12-14 01:15:36.575658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:02.967 [2024-12-14 01:15:36.575693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:03.229 [2024-12-14 01:15:36.583655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:03.229 [2024-12-14 01:15:36.583777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:03.229 [2024-12-14 01:15:36.600648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:03.229 0 00:16:03.229 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 01:15:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:03.229 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 01:15:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:03.490 "subsystems": [ 00:16:03.490 { 00:16:03.490 "subsystem": "fsdev", 00:16:03.490 "config": [ 00:16:03.490 { 00:16:03.490 "method": "fsdev_set_opts", 00:16:03.490 "params": { 00:16:03.490 "fsdev_io_pool_size": 65535, 00:16:03.490 "fsdev_io_cache_size": 256 00:16:03.490 } 00:16:03.490 } 00:16:03.490 ] 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "subsystem": "keyring", 00:16:03.490 "config": [] 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "subsystem": "iobuf", 00:16:03.490 "config": [ 00:16:03.490 { 
00:16:03.490 "method": "iobuf_set_options", 00:16:03.490 "params": { 00:16:03.490 "small_pool_count": 8192, 00:16:03.490 "large_pool_count": 1024, 00:16:03.490 "small_bufsize": 8192, 00:16:03.490 "large_bufsize": 135168, 00:16:03.490 "enable_numa": false 00:16:03.490 } 00:16:03.490 } 00:16:03.490 ] 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "subsystem": "sock", 00:16:03.490 "config": [ 00:16:03.490 { 00:16:03.490 "method": "sock_set_default_impl", 00:16:03.490 "params": { 00:16:03.490 "impl_name": "posix" 00:16:03.490 } 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "method": "sock_impl_set_options", 00:16:03.490 "params": { 00:16:03.490 "impl_name": "ssl", 00:16:03.490 "recv_buf_size": 4096, 00:16:03.490 "send_buf_size": 4096, 00:16:03.490 "enable_recv_pipe": true, 00:16:03.490 "enable_quickack": false, 00:16:03.490 "enable_placement_id": 0, 00:16:03.490 "enable_zerocopy_send_server": true, 00:16:03.490 "enable_zerocopy_send_client": false, 00:16:03.490 "zerocopy_threshold": 0, 00:16:03.490 "tls_version": 0, 00:16:03.490 "enable_ktls": false 00:16:03.490 } 00:16:03.490 }, 00:16:03.491 { 00:16:03.491 "method": "sock_impl_set_options", 00:16:03.491 "params": { 00:16:03.491 "impl_name": "posix", 00:16:03.491 "recv_buf_size": 2097152, 00:16:03.491 "send_buf_size": 2097152, 00:16:03.491 "enable_recv_pipe": true, 00:16:03.491 "enable_quickack": false, 00:16:03.491 "enable_placement_id": 0, 00:16:03.491 "enable_zerocopy_send_server": true, 00:16:03.491 "enable_zerocopy_send_client": false, 00:16:03.491 "zerocopy_threshold": 0, 00:16:03.491 "tls_version": 0, 00:16:03.491 "enable_ktls": false 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "vmd", 00:16:03.491 "config": [] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "accel", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "accel_set_options", 00:16:03.491 "params": { 00:16:03.491 "small_cache_size": 128, 00:16:03.491 "large_cache_size": 16, 
00:16:03.491 "task_count": 2048, 00:16:03.491 "sequence_count": 2048, 00:16:03.491 "buf_count": 2048 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "bdev", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "bdev_set_options", 00:16:03.491 "params": { 00:16:03.491 "bdev_io_pool_size": 65535, 00:16:03.491 "bdev_io_cache_size": 256, 00:16:03.491 "bdev_auto_examine": true, 00:16:03.491 "iobuf_small_cache_size": 128, 00:16:03.491 "iobuf_large_cache_size": 16 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_raid_set_options", 00:16:03.491 "params": { 00:16:03.491 "process_window_size_kb": 1024, 00:16:03.491 "process_max_bandwidth_mb_sec": 0 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_iscsi_set_options", 00:16:03.491 "params": { 00:16:03.491 "timeout_sec": 30 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_nvme_set_options", 00:16:03.491 "params": { 00:16:03.491 "action_on_timeout": "none", 00:16:03.491 "timeout_us": 0, 00:16:03.491 "timeout_admin_us": 0, 00:16:03.491 "keep_alive_timeout_ms": 10000, 00:16:03.491 "arbitration_burst": 0, 00:16:03.491 "low_priority_weight": 0, 00:16:03.491 "medium_priority_weight": 0, 00:16:03.491 "high_priority_weight": 0, 00:16:03.491 "nvme_adminq_poll_period_us": 10000, 00:16:03.491 "nvme_ioq_poll_period_us": 0, 00:16:03.491 "io_queue_requests": 0, 00:16:03.491 "delay_cmd_submit": true, 00:16:03.491 "transport_retry_count": 4, 00:16:03.491 "bdev_retry_count": 3, 00:16:03.491 "transport_ack_timeout": 0, 00:16:03.491 "ctrlr_loss_timeout_sec": 0, 00:16:03.491 "reconnect_delay_sec": 0, 00:16:03.491 "fast_io_fail_timeout_sec": 0, 00:16:03.491 "disable_auto_failback": false, 00:16:03.491 "generate_uuids": false, 00:16:03.491 "transport_tos": 0, 00:16:03.491 "nvme_error_stat": false, 00:16:03.491 "rdma_srq_size": 0, 00:16:03.491 "io_path_stat": false, 00:16:03.491 "allow_accel_sequence": false, 
00:16:03.491 "rdma_max_cq_size": 0, 00:16:03.491 "rdma_cm_event_timeout_ms": 0, 00:16:03.491 "dhchap_digests": [ 00:16:03.491 "sha256", 00:16:03.491 "sha384", 00:16:03.491 "sha512" 00:16:03.491 ], 00:16:03.491 "dhchap_dhgroups": [ 00:16:03.491 "null", 00:16:03.491 "ffdhe2048", 00:16:03.491 "ffdhe3072", 00:16:03.491 "ffdhe4096", 00:16:03.491 "ffdhe6144", 00:16:03.491 "ffdhe8192" 00:16:03.491 ], 00:16:03.491 "rdma_umr_per_io": false 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_nvme_set_hotplug", 00:16:03.491 "params": { 00:16:03.491 "period_us": 100000, 00:16:03.491 "enable": false 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_malloc_create", 00:16:03.491 "params": { 00:16:03.491 "name": "malloc0", 00:16:03.491 "num_blocks": 8192, 00:16:03.491 "block_size": 4096, 00:16:03.491 "physical_block_size": 4096, 00:16:03.491 "uuid": "d6835eb9-356a-4285-bcd8-118d41b77941", 00:16:03.491 "optimal_io_boundary": 0, 00:16:03.491 "md_size": 0, 00:16:03.491 "dif_type": 0, 00:16:03.491 "dif_is_head_of_md": false, 00:16:03.491 "dif_pi_format": 0 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_wait_for_examine" 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "scsi", 00:16:03.491 "config": null 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "scheduler", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "framework_set_scheduler", 00:16:03.491 "params": { 00:16:03.491 "name": "static" 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "vhost_scsi", 00:16:03.491 "config": [] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "vhost_blk", 00:16:03.491 "config": [] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "ublk", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "ublk_create_target", 00:16:03.491 "params": { 00:16:03.491 "cpumask": "1" 00:16:03.491 } 
00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "ublk_start_disk", 00:16:03.491 "params": { 00:16:03.491 "bdev_name": "malloc0", 00:16:03.491 "ublk_id": 0, 00:16:03.491 "num_queues": 1, 00:16:03.491 "queue_depth": 128 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "nbd", 00:16:03.491 "config": [] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "nvmf", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "nvmf_set_config", 00:16:03.491 "params": { 00:16:03.491 "discovery_filter": "match_any", 00:16:03.491 "admin_cmd_passthru": { 00:16:03.491 "identify_ctrlr": false 00:16:03.491 }, 00:16:03.491 "dhchap_digests": [ 00:16:03.491 "sha256", 00:16:03.491 "sha384", 00:16:03.491 "sha512" 00:16:03.491 ], 00:16:03.491 "dhchap_dhgroups": [ 00:16:03.491 "null", 00:16:03.491 "ffdhe2048", 00:16:03.491 "ffdhe3072", 00:16:03.491 "ffdhe4096", 00:16:03.491 "ffdhe6144", 00:16:03.491 "ffdhe8192" 00:16:03.491 ] 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "nvmf_set_max_subsystems", 00:16:03.491 "params": { 00:16:03.491 "max_subsystems": 1024 00:16:03.491 } 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "nvmf_set_crdt", 00:16:03.491 "params": { 00:16:03.491 "crdt1": 0, 00:16:03.491 "crdt2": 0, 00:16:03.491 "crdt3": 0 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "subsystem": "iscsi", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "method": "iscsi_set_options", 00:16:03.491 "params": { 00:16:03.491 "node_base": "iqn.2016-06.io.spdk", 00:16:03.491 "max_sessions": 128, 00:16:03.491 "max_connections_per_session": 2, 00:16:03.491 "max_queue_depth": 64, 00:16:03.491 "default_time2wait": 2, 00:16:03.491 "default_time2retain": 20, 00:16:03.491 "first_burst_length": 8192, 00:16:03.491 "immediate_data": true, 00:16:03.491 "allow_duplicated_isid": false, 00:16:03.491 "error_recovery_level": 0, 00:16:03.491 "nop_timeout": 60, 00:16:03.491 
"nop_in_interval": 30, 00:16:03.491 "disable_chap": false, 00:16:03.491 "require_chap": false, 00:16:03.491 "mutual_chap": false, 00:16:03.491 "chap_group": 0, 00:16:03.491 "max_large_datain_per_connection": 64, 00:16:03.491 "max_r2t_per_connection": 4, 00:16:03.491 "pdu_pool_size": 36864, 00:16:03.491 "immediate_data_pool_size": 16384, 00:16:03.491 "data_out_pool_size": 2048 00:16:03.491 } 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 }' 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86291 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86291 ']' 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86291 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86291 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.491 killing process with pid 86291 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86291' 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86291 00:16:03.491 01:15:36 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86291 00:16:03.752 [2024-12-14 01:15:37.137202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:03.752 [2024-12-14 01:15:37.174748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:03.752 [2024-12-14 01:15:37.174889] ublk.c: 469:ublk_ctrl_cmd_submit: 
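The `save_config` dump above is a complete SPDK JSON configuration whose `ublk` subsystem records the original `ublk_create_target` and `ublk_start_disk` calls; that is what allows the second target started below (via `-c /dev/fd/63`) to recreate `/dev/ublkb0`. A minimal sketch of inspecting such a dump, using only the standard `json` module and a trimmed stand-in config (only the fields quoted in the log; not the full output):

```python
import json

# Trimmed stand-in for the save_config output captured in the log above;
# only the bdev and ublk subsystems are reproduced here.
saved = json.loads("""
{
  "subsystems": [
    {"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc0", "num_blocks": 8192, "block_size": 4096}}
    ]},
    {"subsystem": "ublk", "config": [
      {"method": "ublk_create_target", "params": {"cpumask": "1"}},
      {"method": "ublk_start_disk",
       "params": {"bdev_name": "malloc0", "ublk_id": 0,
                  "num_queues": 1, "queue_depth": 128}}
    ]}
  ]
}
""")

def ublk_disks(cfg):
    """Return the ublk_start_disk params recorded in a saved SPDK config."""
    for sub in cfg["subsystems"]:
        if sub["subsystem"] == "ublk":
            return [entry["params"] for entry in sub["config"]
                    if entry["method"] == "ublk_start_disk"]
    return []

disks = ublk_disks(saved)
print(disks[0]["bdev_name"], disks[0]["ublk_id"])  # malloc0 0
```

The test script's `[[ /dev/ublkb0 == ... ]]` check further down is the runtime counterpart of this: after reload, `ublk_get_disks` must report the same disk the saved config describes.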
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:03.752 [2024-12-14 01:15:37.182656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:03.752 [2024-12-14 01:15:37.182718] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:03.752 [2024-12-14 01:15:37.182727] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:03.752 [2024-12-14 01:15:37.182759] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:03.752 [2024-12-14 01:15:37.182904] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86329 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86329 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86329 ']' 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:04.325 01:15:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:04.325 "subsystems": [ 00:16:04.325 { 00:16:04.325 "subsystem": "fsdev", 00:16:04.325 "config": [ 00:16:04.325 { 00:16:04.325 "method": "fsdev_set_opts", 00:16:04.325 "params": { 00:16:04.325 "fsdev_io_pool_size": 65535, 00:16:04.325 "fsdev_io_cache_size": 256 00:16:04.325 } 00:16:04.325 } 00:16:04.325 ] 00:16:04.325 }, 00:16:04.325 { 00:16:04.325 "subsystem": "keyring", 00:16:04.325 "config": [] 00:16:04.325 }, 00:16:04.325 { 00:16:04.325 "subsystem": "iobuf", 00:16:04.325 "config": [ 00:16:04.325 { 00:16:04.325 "method": "iobuf_set_options", 00:16:04.325 "params": { 00:16:04.325 "small_pool_count": 8192, 00:16:04.325 "large_pool_count": 1024, 00:16:04.325 "small_bufsize": 8192, 00:16:04.325 "large_bufsize": 135168, 00:16:04.325 "enable_numa": false 00:16:04.325 } 00:16:04.325 } 00:16:04.325 ] 00:16:04.325 }, 00:16:04.325 { 00:16:04.325 "subsystem": "sock", 00:16:04.325 "config": [ 00:16:04.326 { 00:16:04.326 "method": "sock_set_default_impl", 00:16:04.326 "params": { 00:16:04.326 "impl_name": "posix" 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "sock_impl_set_options", 00:16:04.326 "params": { 00:16:04.326 "impl_name": "ssl", 00:16:04.326 "recv_buf_size": 4096, 00:16:04.326 "send_buf_size": 4096, 00:16:04.326 "enable_recv_pipe": true, 00:16:04.326 "enable_quickack": false, 00:16:04.326 "enable_placement_id": 0, 00:16:04.326 "enable_zerocopy_send_server": true, 00:16:04.326 "enable_zerocopy_send_client": false, 00:16:04.326 "zerocopy_threshold": 0, 00:16:04.326 "tls_version": 0, 00:16:04.326 "enable_ktls": false 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "sock_impl_set_options", 00:16:04.326 "params": { 00:16:04.326 "impl_name": "posix", 
00:16:04.326 "recv_buf_size": 2097152, 00:16:04.326 "send_buf_size": 2097152, 00:16:04.326 "enable_recv_pipe": true, 00:16:04.326 "enable_quickack": false, 00:16:04.326 "enable_placement_id": 0, 00:16:04.326 "enable_zerocopy_send_server": true, 00:16:04.326 "enable_zerocopy_send_client": false, 00:16:04.326 "zerocopy_threshold": 0, 00:16:04.326 "tls_version": 0, 00:16:04.326 "enable_ktls": false 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "vmd", 00:16:04.326 "config": [] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "accel", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "accel_set_options", 00:16:04.326 "params": { 00:16:04.326 "small_cache_size": 128, 00:16:04.326 "large_cache_size": 16, 00:16:04.326 "task_count": 2048, 00:16:04.326 "sequence_count": 2048, 00:16:04.326 "buf_count": 2048 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "bdev", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "bdev_set_options", 00:16:04.326 "params": { 00:16:04.326 "bdev_io_pool_size": 65535, 00:16:04.326 "bdev_io_cache_size": 256, 00:16:04.326 "bdev_auto_examine": true, 00:16:04.326 "iobuf_small_cache_size": 128, 00:16:04.326 "iobuf_large_cache_size": 16 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_raid_set_options", 00:16:04.326 "params": { 00:16:04.326 "process_window_size_kb": 1024, 00:16:04.326 "process_max_bandwidth_mb_sec": 0 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_iscsi_set_options", 00:16:04.326 "params": { 00:16:04.326 "timeout_sec": 30 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_nvme_set_options", 00:16:04.326 "params": { 00:16:04.326 "action_on_timeout": "none", 00:16:04.326 "timeout_us": 0, 00:16:04.326 "timeout_admin_us": 0, 00:16:04.326 "keep_alive_timeout_ms": 10000, 00:16:04.326 "arbitration_burst": 0, 00:16:04.326 
"low_priority_weight": 0, 00:16:04.326 "medium_priority_weight": 0, 00:16:04.326 "high_priority_weight": 0, 00:16:04.326 "nvme_adminq_poll_period_us": 10000, 00:16:04.326 "nvme_ioq_poll_period_us": 0, 00:16:04.326 "io_queue_requests": 0, 00:16:04.326 "delay_cmd_submit": true, 00:16:04.326 "transport_retry_count": 4, 00:16:04.326 "bdev_retry_count": 3, 00:16:04.326 "transport_ack_timeout": 0, 00:16:04.326 "ctrlr_loss_timeout_sec": 0, 00:16:04.326 "reconnect_delay_sec": 0, 00:16:04.326 "fast_io_fail_timeout_sec": 0, 00:16:04.326 "disable_auto_failback": false, 00:16:04.326 "generate_uuids": false, 00:16:04.326 "transport_tos": 0, 00:16:04.326 "nvme_error_stat": false, 00:16:04.326 "rdma_srq_size": 0, 00:16:04.326 "io_path_stat": false, 00:16:04.326 "allow_accel_sequence": false, 00:16:04.326 "rdma_max_cq_size": 0, 00:16:04.326 "rdma_cm_event_timeout_ms": 0, 00:16:04.326 "dhchap_digests": [ 00:16:04.326 "sha256", 00:16:04.326 "sha384", 00:16:04.326 "sha512" 00:16:04.326 ], 00:16:04.326 "dhchap_dhgroups": [ 00:16:04.326 "null", 00:16:04.326 "ffdhe2048", 00:16:04.326 "ffdhe3072", 00:16:04.326 "ffdhe4096", 00:16:04.326 "ffdhe6144", 00:16:04.326 "ffdhe8192" 00:16:04.326 ], 00:16:04.326 "rdma_umr_per_io": false 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_nvme_set_hotplug", 00:16:04.326 "params": { 00:16:04.326 "period_us": 100000, 00:16:04.326 "enable": false 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_malloc_create", 00:16:04.326 "params": { 00:16:04.326 "name": "malloc0", 00:16:04.326 "num_blocks": 8192, 00:16:04.326 "block_size": 4096, 00:16:04.326 "physical_block_size": 4096, 00:16:04.326 "uuid": "d6835eb9-356a-4285-bcd8-118d41b77941", 00:16:04.326 "optimal_io_boundary": 0, 00:16:04.326 "md_size": 0, 00:16:04.326 "dif_type": 0, 00:16:04.326 "dif_is_head_of_md": false, 00:16:04.326 "dif_pi_format": 0 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "bdev_wait_for_examine" 00:16:04.326 } 
00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "scsi", 00:16:04.326 "config": null 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "scheduler", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "framework_set_scheduler", 00:16:04.326 "params": { 00:16:04.326 "name": "static" 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "vhost_scsi", 00:16:04.326 "config": [] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "vhost_blk", 00:16:04.326 "config": [] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "ublk", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "ublk_create_target", 00:16:04.326 "params": { 00:16:04.326 "cpumask": "1" 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "ublk_start_disk", 00:16:04.326 "params": { 00:16:04.326 "bdev_name": "malloc0", 00:16:04.326 "ublk_id": 0, 00:16:04.326 "num_queues": 1, 00:16:04.326 "queue_depth": 128 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "nbd", 00:16:04.326 "config": [] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "nvmf", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "nvmf_set_config", 00:16:04.326 "params": { 00:16:04.326 "discovery_filter": "match_any", 00:16:04.326 "admin_cmd_passthru": { 00:16:04.326 "identify_ctrlr": false 00:16:04.326 }, 00:16:04.326 "dhchap_digests": [ 00:16:04.326 "sha256", 00:16:04.326 "sha384", 00:16:04.326 "sha512" 00:16:04.326 ], 00:16:04.326 "dhchap_dhgroups": [ 00:16:04.326 "null", 00:16:04.326 "ffdhe2048", 00:16:04.326 "ffdhe3072", 00:16:04.326 "ffdhe4096", 00:16:04.326 "ffdhe6144", 00:16:04.326 "ffdhe8192" 00:16:04.326 ] 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": "nvmf_set_max_subsystems", 00:16:04.326 "params": { 00:16:04.326 "max_subsystems": 1024 00:16:04.326 } 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "method": 
"nvmf_set_crdt", 00:16:04.326 "params": { 00:16:04.326 "crdt1": 0, 00:16:04.326 "crdt2": 0, 00:16:04.326 "crdt3": 0 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }, 00:16:04.326 { 00:16:04.326 "subsystem": "iscsi", 00:16:04.326 "config": [ 00:16:04.326 { 00:16:04.326 "method": "iscsi_set_options", 00:16:04.326 "params": { 00:16:04.326 "node_base": "iqn.2016-06.io.spdk", 00:16:04.326 "max_sessions": 128, 00:16:04.326 "max_connections_per_session": 2, 00:16:04.326 "max_queue_depth": 64, 00:16:04.326 "default_time2wait": 2, 00:16:04.326 "default_time2retain": 20, 00:16:04.326 "first_burst_length": 8192, 00:16:04.326 "immediate_data": true, 00:16:04.326 "allow_duplicated_isid": false, 00:16:04.326 "error_recovery_level": 0, 00:16:04.326 "nop_timeout": 60, 00:16:04.326 "nop_in_interval": 30, 00:16:04.326 "disable_chap": false, 00:16:04.326 "require_chap": false, 00:16:04.326 "mutual_chap": false, 00:16:04.326 "chap_group": 0, 00:16:04.326 "max_large_datain_per_connection": 64, 00:16:04.326 "max_r2t_per_connection": 4, 00:16:04.326 "pdu_pool_size": 36864, 00:16:04.326 "immediate_data_pool_size": 16384, 00:16:04.326 "data_out_pool_size": 2048 00:16:04.326 } 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 } 00:16:04.326 ] 00:16:04.326 }' 00:16:04.326 01:15:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:04.326 [2024-12-14 01:15:37.715106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:16:04.326 [2024-12-14 01:15:37.715255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86329 ] 00:16:04.326 [2024-12-14 01:15:37.863373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.326 [2024-12-14 01:15:37.892037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.899 [2024-12-14 01:15:38.281645] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:04.899 [2024-12-14 01:15:38.282026] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:04.899 [2024-12-14 01:15:38.289780] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:04.899 [2024-12-14 01:15:38.289854] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:04.899 [2024-12-14 01:15:38.289862] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:04.899 [2024-12-14 01:15:38.289872] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:04.899 [2024-12-14 01:15:38.297789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:04.899 [2024-12-14 01:15:38.297812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:04.899 [2024-12-14 01:15:38.305662] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:04.899 [2024-12-14 01:15:38.305773] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:04.899 [2024-12-14 01:15:38.322660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.160 01:15:38 
ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86329 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86329 ']' 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86329 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86329 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.160 killing process with pid 86329 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86329' 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86329 00:16:05.160 01:15:38 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86329 00:16:05.422 
[2024-12-14 01:15:38.908436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.422 [2024-12-14 01:15:38.944679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.422 [2024-12-14 01:15:38.944830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.422 [2024-12-14 01:15:38.952665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.422 [2024-12-14 01:15:38.952733] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:05.422 [2024-12-14 01:15:38.952748] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:05.422 [2024-12-14 01:15:38.952780] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:05.422 [2024-12-14 01:15:38.952930] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:05.994 01:15:39 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:05.994 00:16:05.994 real 0m3.820s 00:16:05.994 user 0m2.618s 00:16:05.994 sys 0m1.876s 00:16:05.994 01:15:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.994 ************************************ 00:16:05.994 END TEST test_save_ublk_config 00:16:05.994 ************************************ 00:16:05.994 01:15:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:05.994 01:15:39 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86380 00:16:05.994 01:15:39 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.994 01:15:39 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86380 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@835 -- # '[' -z 86380 ']' 00:16:05.994 01:15:39 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.994 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.994 01:15:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.994 [2024-12-14 01:15:39.544933] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:05.994 [2024-12-14 01:15:39.545091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86380 ] 00:16:06.255 [2024-12-14 01:15:39.692915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.255 [2024-12-14 01:15:39.725696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.255 [2024-12-14 01:15:39.725705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.828 01:15:40 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.828 01:15:40 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:06.828 01:15:40 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:06.828 01:15:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.828 01:15:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.828 01:15:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.828 ************************************ 00:16:06.828 START TEST test_create_ublk 00:16:06.828 ************************************ 00:16:06.828 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:06.828 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:06.828 
01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.828 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.828 [2024-12-14 01:15:40.414647] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:06.828 [2024-12-14 01:15:40.416295] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:06.828 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.829 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:06.829 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:06.829 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.829 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.090 [2024-12-14 01:15:40.507795] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:07.090 [2024-12-14 01:15:40.508252] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:07.090 [2024-12-14 01:15:40.508270] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:07.090 [2024-12-14 01:15:40.508279] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.090 [2024-12-14 01:15:40.515678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.090 [2024-12-14 01:15:40.515717] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.090 [2024-12-14 01:15:40.523655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.090 [2024-12-14 01:15:40.524356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:07.090 [2024-12-14 01:15:40.546666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.090 01:15:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:07.090 { 00:16:07.090 "ublk_device": "/dev/ublkb0", 00:16:07.090 "id": 0, 00:16:07.090 "queue_depth": 512, 00:16:07.090 "num_queues": 4, 00:16:07.090 "bdev_name": "Malloc0" 00:16:07.090 } 00:16:07.090 ]' 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:07.090 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:07.090 01:15:40 ublk.test_create_ublk -- 
ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:07.352 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:07.352 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:07.352 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:07.352 01:15:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:07.352 01:15:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:07.352 fio: verification read phase will never start because write phase 
uses all of runtime 00:16:07.352 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:07.352 fio-3.35 00:16:07.352 Starting 1 process 00:16:17.429 00:16:17.429 fio_test: (groupid=0, jobs=1): err= 0: pid=86425: Sat Dec 14 01:15:50 2024 00:16:17.429 write: IOPS=17.3k, BW=67.5MiB/s (70.8MB/s)(675MiB/10001msec); 0 zone resets 00:16:17.429 clat (usec): min=32, max=4228, avg=57.07, stdev=83.00 00:16:17.429 lat (usec): min=33, max=4252, avg=57.53, stdev=83.03 00:16:17.429 clat percentiles (usec): 00:16:17.429 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 45], 00:16:17.429 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 57], 00:16:17.429 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 72], 00:16:17.429 | 99.00th=[ 84], 99.50th=[ 94], 99.90th=[ 1336], 99.95th=[ 2474], 00:16:17.429 | 99.99th=[ 3490] 00:16:17.429 bw ( KiB/s): min=57664, max=86976, per=100.00%, avg=69300.21, stdev=8974.95, samples=19 00:16:17.429 iops : min=14416, max=21744, avg=17325.16, stdev=2243.67, samples=19 00:16:17.429 lat (usec) : 50=30.78%, 100=68.78%, 250=0.26%, 500=0.04%, 750=0.01% 00:16:17.429 lat (usec) : 1000=0.01% 00:16:17.429 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:16:17.429 cpu : usr=3.31%, sys=13.64%, ctx=172843, majf=0, minf=795 00:16:17.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.429 issued rwts: total=0,172854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.429 00:16:17.429 Run status group 0 (all jobs): 00:16:17.429 WRITE: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=675MiB (708MB), run=10001-10001msec 00:16:17.429 00:16:17.429 Disk stats (read/write): 00:16:17.429 ublkb0: ios=0/171113, merge=0/0, 
ticks=0/8283, in_queue=8284, util=99.09% 00:16:17.429 01:15:50 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:17.429 01:15:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.429 01:15:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.429 [2024-12-14 01:15:50.973618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:17.429 [2024-12-14 01:15:51.014249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:17.429 [2024-12-14 01:15:51.015155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:17.429 [2024-12-14 01:15:51.019680] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:17.429 [2024-12-14 01:15:51.019919] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:17.429 [2024-12-14 01:15:51.019927] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.429 01:15:51 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:17.429 01:15:51 
ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.429 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.429 [2024-12-14 01:15:51.035746] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:17.687 request: 00:16:17.687 { 00:16:17.687 "ublk_id": 0, 00:16:17.687 "method": "ublk_stop_disk", 00:16:17.687 "req_id": 1 00:16:17.687 } 00:16:17.687 Got JSON-RPC error response 00:16:17.687 response: 00:16:17.687 { 00:16:17.687 "code": -19, 00:16:17.687 "message": "No such device" 00:16:17.687 } 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.687 01:15:51 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:17.687 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 [2024-12-14 01:15:51.051724] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:17.688 [2024-12-14 01:15:51.053676] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:17.688 [2024-12-14 01:15:51.053708] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.688 01:15:51 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # 
set +x 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.688 01:15:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:17.688 01:15:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:17.688 00:16:17.688 real 0m10.825s 00:16:17.688 user 0m0.638s 00:16:17.688 sys 0m1.456s 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 ************************************ 00:16:17.688 END TEST test_create_ublk 00:16:17.688 ************************************ 00:16:17.688 01:15:51 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:17.688 
01:15:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.688 01:15:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.688 01:15:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 ************************************ 00:16:17.688 START TEST test_create_multi_ublk 00:16:17.688 ************************************ 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.688 [2024-12-14 01:15:51.279641] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:17.688 [2024-12-14 01:15:51.280793] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.688 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 
-q 4 -d 512 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.946 [2024-12-14 01:15:51.375781] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:17.946 [2024-12-14 01:15:51.376097] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:17.946 [2024-12-14 01:15:51.376111] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:17.946 [2024-12-14 01:15:51.376116] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:17.946 [2024-12-14 01:15:51.387914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:17.946 [2024-12-14 01:15:51.387928] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:17.946 [2024-12-14 01:15:51.407648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:17.946 [2024-12-14 01:15:51.408171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:17.946 [2024-12-14 01:15:51.453658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.946 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.205 [2024-12-14 01:15:51.561755] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:18.205 [2024-12-14 01:15:51.562066] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:18.205 [2024-12-14 01:15:51.562078] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:18.205 [2024-12-14 01:15:51.562085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.205 [2024-12-14 01:15:51.573658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.205 [2024-12-14 01:15:51.573680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.205 [2024-12-14 01:15:51.585642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.205 [2024-12-14 01:15:51.586157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:18.205 [2024-12-14 01:15:51.596645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:18.205 
01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.205 [2024-12-14 01:15:51.692745] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:18.205 [2024-12-14 01:15:51.693066] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:18.205 [2024-12-14 01:15:51.693075] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:18.205 [2024-12-14 01:15:51.693080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.205 [2024-12-14 01:15:51.704672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.205 [2024-12-14 01:15:51.704689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.205 [2024-12-14 01:15:51.716650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.205 [2024-12-14 01:15:51.717160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:18.205 [2024-12-14 01:15:51.741656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 
00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.205 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.463 [2024-12-14 01:15:51.848732] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:18.463 [2024-12-14 01:15:51.849053] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:18.463 [2024-12-14 01:15:51.849064] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:18.463 [2024-12-14 01:15:51.849071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.463 [2024-12-14 01:15:51.860654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.463 [2024-12-14 01:15:51.860677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.463 [2024-12-14 01:15:51.872643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.463 [2024-12-14 01:15:51.873168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:18.463 [2024-12-14 01:15:51.879668] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:18.463 { 00:16:18.463 "ublk_device": "/dev/ublkb0", 00:16:18.463 "id": 0, 00:16:18.463 "queue_depth": 512, 00:16:18.463 "num_queues": 4, 00:16:18.463 "bdev_name": "Malloc0" 00:16:18.463 }, 00:16:18.463 { 00:16:18.463 "ublk_device": "/dev/ublkb1", 00:16:18.463 "id": 1, 00:16:18.463 "queue_depth": 512, 00:16:18.463 "num_queues": 4, 00:16:18.463 "bdev_name": "Malloc1" 00:16:18.463 }, 00:16:18.463 { 00:16:18.463 "ublk_device": "/dev/ublkb2", 00:16:18.463 "id": 2, 00:16:18.463 "queue_depth": 512, 00:16:18.463 "num_queues": 4, 00:16:18.463 "bdev_name": "Malloc2" 00:16:18.463 }, 00:16:18.463 { 00:16:18.463 "ublk_device": "/dev/ublkb3", 00:16:18.463 "id": 3, 00:16:18.463 "queue_depth": 512, 00:16:18.463 "num_queues": 4, 00:16:18.463 "bdev_name": "Malloc3" 00:16:18.463 } 00:16:18.463 ]' 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r 
'.[0].id' 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:18.463 01:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.463 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:18.722 01:15:52 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.722 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- 
# [[ 1 = \1 ]] 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.980 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.980 [2024-12-14 01:15:52.548722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:18.980 [2024-12-14 01:15:52.589200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:18.980 [2024-12-14 01:15:52.590322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.238 [2024-12-14 01:15:52.600651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.238 [2024-12-14 01:15:52.600896] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:19.238 [2024-12-14 01:15:52.600908] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.238 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.238 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.238 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:19.238 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.239 [2024-12-14 01:15:52.616728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.239 [2024-12-14 01:15:52.654189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.239 [2024-12-14 01:15:52.655287] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.239 [2024-12-14 01:15:52.664644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.239 [2024-12-14 01:15:52.664920] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:19.239 [2024-12-14 01:15:52.664931] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.239 [2024-12-14 01:15:52.680717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.239 [2024-12-14 01:15:52.716689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.239 [2024-12-14 01:15:52.717426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.239 [2024-12-14 01:15:52.724651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.239 [2024-12-14 01:15:52.724891] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:19.239 [2024-12-14 01:15:52.724901] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.239 [2024-12-14 01:15:52.740715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.239 [2024-12-14 01:15:52.783142] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.239 [2024-12-14 01:15:52.784149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.239 [2024-12-14 01:15:52.788661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.239 [2024-12-14 01:15:52.788889] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:19.239 [2024-12-14 01:15:52.788894] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.239 01:15:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:19.498 [2024-12-14 01:15:52.996700] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.498 [2024-12-14 01:15:52.998265] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:19.498 [2024-12-14 01:15:52.998295] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.498 
01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.498 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.756 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:20.014 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.014 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:16:20.014 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:16:20.014 01:15:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:16:20.014 
00:16:20.014 real 0m2.140s
00:16:20.014 user 0m0.815s
00:16:20.014 sys 0m0.141s
00:16:20.014 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:20.015 ************************************
00:16:20.015 END TEST test_create_multi_ublk
00:16:20.015 ************************************
00:16:20.015 01:15:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:20.015 01:15:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:16:20.015 01:15:53 ublk -- ublk/ublk.sh@147 -- # cleanup
00:16:20.015 01:15:53 ublk -- ublk/ublk.sh@130 -- # killprocess 86380
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@954 -- # '[' -z 86380 ']'
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@958 -- # kill -0 86380
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@959 -- # uname
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86380
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:20.015 01:15:53 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86380'
killing process with pid 86380
01:15:53 ublk -- common/autotest_common.sh@973 -- # kill 86380
01:15:53 ublk -- common/autotest_common.sh@978 -- # wait 86380
00:16:20.273 [2024-12-14 01:15:53.690226] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
[2024-12-14 01:15:53.690300] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:20.532 
00:16:20.532 real 0m18.635s
00:16:20.532 user 0m28.702s
00:16:20.532 sys 0m8.039s
00:16:20.532 01:15:54 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:20.532 ************************************
00:16:20.532 END TEST ublk
00:16:20.532 ************************************
00:16:20.532 01:15:54 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:20.532 01:15:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:20.532 01:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:20.532 01:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:20.532 01:15:54 -- common/autotest_common.sh@10 -- # set +x
00:16:20.532 ************************************
00:16:20.532 START TEST ublk_recovery
00:16:20.532 ************************************
00:16:20.532 01:15:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:20.793 * Looking for test storage...
00:16:20.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:20.793 01:15:54 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:20.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.793 --rc genhtml_branch_coverage=1
00:16:20.793 --rc genhtml_function_coverage=1
00:16:20.793 --rc genhtml_legend=1
00:16:20.793 --rc geninfo_all_blocks=1
00:16:20.793 --rc geninfo_unexecuted_blocks=1
00:16:20.793 
00:16:20.793 '
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:20.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.793 --rc genhtml_branch_coverage=1
00:16:20.793 --rc genhtml_function_coverage=1
00:16:20.793 --rc genhtml_legend=1
00:16:20.793 --rc geninfo_all_blocks=1
00:16:20.793 --rc geninfo_unexecuted_blocks=1
00:16:20.793 
00:16:20.793 '
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:16:20.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.793 --rc genhtml_branch_coverage=1
00:16:20.793 --rc genhtml_function_coverage=1
00:16:20.793 --rc genhtml_legend=1
00:16:20.793 --rc geninfo_all_blocks=1
00:16:20.793 --rc geninfo_unexecuted_blocks=1
00:16:20.793 
00:16:20.793 '
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:16:20.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.793 --rc genhtml_branch_coverage=1
00:16:20.793 --rc genhtml_function_coverage=1
00:16:20.793 --rc genhtml_legend=1
00:16:20.793 --rc geninfo_all_blocks=1
00:16:20.793 --rc geninfo_unexecuted_blocks=1
00:16:20.793 
00:16:20.793 '
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:16:20.793 01:15:54 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86752
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86752
00:16:20.793 01:15:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86752 ']'
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:20.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:20.793 01:15:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:20.793 [2024-12-14 01:15:54.301666] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-14 01:15:54.301796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86752 ]
00:16:21.052 [2024-12-14 01:15:54.444662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:21.052 [2024-12-14 01:15:54.481333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:21.052 [2024-12-14 01:15:54.481335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:16:21.617 01:15:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:21.617 [2024-12-14 01:15:55.145640] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
[2024-12-14 01:15:55.146903] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.617 01:15:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:21.617 malloc0
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.617 01:15:55 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:21.617 [2024-12-14 01:15:55.185748] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
[2024-12-14 01:15:55.185832] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
[2024-12-14 01:15:55.185838] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
[2024-12-14 01:15:55.185846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
[2024-12-14 01:15:55.193786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
[2024-12-14 01:15:55.193805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
[2024-12-14 01:15:55.201647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
[2024-12-14 01:15:55.201772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
[2024-12-14 01:15:55.216658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:16:21.617 1
00:16:21.617 01:15:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.617 01:15:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:16:22.989 01:15:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86785
00:16:22.989 01:15:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:16:22.989 01:15:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:16:22.989 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:22.989 fio-3.35
00:16:22.989 Starting 1 process
00:16:28.267 01:16:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86752
00:16:28.267 01:16:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:16:33.554 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86752 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:16:33.554 01:16:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=86897
00:16:33.554 01:16:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:33.554 01:16:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 86897
00:16:33.554 01:16:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86897 ']'
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:33.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:33.554 01:16:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:33.554 [2024-12-14 01:16:06.313466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-14 01:16:06.313590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86897 ]
[2024-12-14 01:16:06.453470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-12-14 01:16:06.483128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-14 01:16:06.483205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
01:16:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
01:16:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
01:16:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
01:16:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
[2024-12-14 01:16:07.163643] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:33.815 [2024-12-14 01:16:07.164976] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.815 01:16:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:33.815 malloc0
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.815 01:16:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:33.815 [2024-12-14 01:16:07.203770] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
[2024-12-14 01:16:07.203812] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
[2024-12-14 01:16:07.203821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
[2024-12-14 01:16:07.211687] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
[2024-12-14 01:16:07.211705] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
[2024-12-14 01:16:07.211718] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
[2024-12-14 01:16:07.211796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:16:33.815 1
00:16:33.815 01:16:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.815 01:16:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86785
00:16:33.815 [2024-12-14 01:16:07.219657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
[2024-12-14 01:16:07.222867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
[2024-12-14 01:16:07.227892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
[2024-12-14 01:16:07.227913] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:17:30.045 
00:17:30.045 fio_test: (groupid=0, jobs=1): err= 0: pid=86788: Sat Dec 14 01:16:56 2024
00:17:30.045 read: IOPS=26.4k, BW=103MiB/s (108MB/s)(6186MiB/60002msec)
00:17:30.045 slat (nsec): min=1106, max=134991, avg=5065.14, stdev=1368.32
00:17:30.045 clat (usec): min=651, max=6006.8k, avg=2376.04, stdev=37870.33
00:17:30.045 lat (usec): min=656, max=6006.8k, avg=2381.10, stdev=37870.34
00:17:30.045 clat percentiles (usec):
00:17:30.045 | 1.00th=[ 1778], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1926],
00:17:30.045 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1991], 60.00th=[ 2008],
00:17:30.045 | 70.00th=[ 2057], 80.00th=[ 2114], 90.00th=[ 2409], 95.00th=[ 2933],
00:17:30.045 | 99.00th=[ 4948], 99.50th=[ 5866], 99.90th=[ 7635], 99.95th=[ 8586],
00:17:30.045 | 99.99th=[12780]
00:17:30.045 bw ( KiB/s): min=17304, max=125032, per=100.00%, avg=116293.11, stdev=16007.57, samples=108
00:17:30.045 iops : min= 4326, max=31258, avg=29073.26, stdev=4001.90, samples=108
00:17:30.045 write: IOPS=26.4k, BW=103MiB/s (108MB/s)(6180MiB/60002msec); 0 zone resets
00:17:30.045 slat (nsec): min=1138, max=6685.8k, avg=5167.52, stdev=5783.07
00:17:30.045 clat (usec): min=639, max=6007.0k, avg=2465.05, stdev=38486.26
00:17:30.045 lat (usec): min=644, max=6007.0k, avg=2470.22, stdev=38486.26
00:17:30.045 clat percentiles (usec):
00:17:30.045 | 1.00th=[ 1827], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2024],
00:17:30.045 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2114],
00:17:30.045 | 70.00th=[ 2147], 80.00th=[ 2212], 90.00th=[ 2442], 95.00th=[ 2900],
00:17:30.045 | 99.00th=[ 4948], 99.50th=[ 6063], 99.90th=[ 7701], 99.95th=[ 8717],
00:17:30.045 | 99.99th=[12780]
00:17:30.045 bw ( KiB/s): min=17056, max=124736, per=100.00%, avg=116167.93, stdev=16012.07, samples=108
00:17:30.045 iops : min= 4264, max=31184, avg=29041.98, stdev=4003.02, samples=108
00:17:30.045 lat (usec) : 750=0.01%, 1000=0.01%
00:17:30.045 lat (msec) : 2=34.88%, 4=62.63%, 10=2.47%, 20=0.01%, >=2000=0.01%
00:17:30.045 cpu : usr=5.99%, sys=27.27%, ctx=103887, majf=0, minf=13
00:17:30.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:30.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:30.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:30.045 issued rwts: total=1583576,1582027,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:30.045 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:30.045 
00:17:30.045 Run status group 0 (all jobs):
00:17:30.045 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=6186MiB (6486MB), run=60002-60002msec
00:17:30.045 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=6180MiB (6480MB), run=60002-60002msec
00:17:30.045 
00:17:30.045 Disk stats (read/write):
00:17:30.045 ublkb1: ios=1580381/1578777, merge=0/0, ticks=3673587/3678434, in_queue=7352022, util=99.91%
00:17:30.045 01:16:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:30.045 [2024-12-14 01:16:56.474466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
[2024-12-14 01:16:56.501767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
[2024-12-14 01:16:56.501923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
[2024-12-14 01:16:56.509657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
[2024-12-14 01:16:56.509763] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
[2024-12-14 01:16:56.509772] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.045 01:16:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:30.045 [2024-12-14 01:16:56.523735] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
[2024-12-14 01:16:56.525052] ublk.c: 766:_ublk_fini_done: *DEBUG*:
[2024-12-14 01:16:56.525084] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.045 01:16:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:17:30.045 01:16:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:17:30.045 01:16:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 86897
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 86897 ']'
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 86897
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86897
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:30.045 01:16:56 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 86897
01:16:56 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86897'
01:16:56 ublk_recovery -- common/autotest_common.sh@973 -- # kill 86897
01:16:56 ublk_recovery -- common/autotest_common.sh@978 -- # wait 86897
00:17:30.045 [2024-12-14 01:16:56.730350] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
[2024-12-14 01:16:56.730411] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:17:30.045 ************************************
00:17:30.045 END TEST ublk_recovery
00:17:30.045 ************************************
00:17:30.045 
00:17:30.045 real 1m2.930s
00:17:30.045 user 1m44.227s
00:17:30.045 sys 0m31.013s
00:17:30.045 01:16:57 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:30.045 01:16:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:30.045 01:16:57 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:17:30.045 01:16:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@260 -- # timing_exit lib
00:17:30.045 01:16:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:30.045 01:16:57 -- common/autotest_common.sh@10 -- # set +x
00:17:30.045 01:16:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:17:30.045 01:16:57 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:30.045 01:16:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:30.045 01:16:57 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:30.045 01:16:57 -- common/autotest_common.sh@10 -- # set +x
00:17:30.045 ************************************
00:17:30.045 START TEST ftl
00:17:30.045 ************************************
00:17:30.045 01:16:57 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:30.045 * Looking for test storage...
00:17:30.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:30.045 01:16:57 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:30.045 01:16:57 ftl -- common/autotest_common.sh@1711 -- # lcov --version
00:17:30.045 01:16:57 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:30.045 01:16:57 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:30.045 01:16:57 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:30.045 01:16:57 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:30.045 01:16:57 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:30.045 01:16:57 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:17:30.045 01:16:57 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:17:30.045 01:16:57 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:17:30.045 01:16:57 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:17:30.045 01:16:57 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:17:30.045 01:16:57 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:30.045 01:16:57 ftl -- scripts/common.sh@344 -- # case "$op" in
00:17:30.045 01:16:57 ftl -- scripts/common.sh@345 -- # : 1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:30.045 01:16:57 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:30.045 01:16:57 ftl -- scripts/common.sh@365 -- # decimal 1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@353 -- # local d=1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:30.045 01:16:57 ftl -- scripts/common.sh@355 -- # echo 1
00:17:30.045 01:16:57 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:17:30.046 01:16:57 ftl -- scripts/common.sh@366 -- # decimal 2
00:17:30.046 01:16:57 ftl -- scripts/common.sh@353 -- # local d=2
00:17:30.046 01:16:57 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:30.046 01:16:57 ftl -- scripts/common.sh@355 -- # echo 2
00:17:30.046 01:16:57 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:17:30.046 01:16:57 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:30.046 01:16:57 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:30.046 01:16:57 ftl -- scripts/common.sh@368 -- # return 0
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.046 --rc genhtml_branch_coverage=1
00:17:30.046 --rc genhtml_function_coverage=1
00:17:30.046 --rc genhtml_legend=1
00:17:30.046 --rc geninfo_all_blocks=1
00:17:30.046 --rc geninfo_unexecuted_blocks=1
00:17:30.046 
00:17:30.046 '
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.046 --rc genhtml_branch_coverage=1
00:17:30.046 --rc genhtml_function_coverage=1
00:17:30.046 --rc genhtml_legend=1
00:17:30.046 --rc geninfo_all_blocks=1
00:17:30.046 --rc geninfo_unexecuted_blocks=1
00:17:30.046 
00:17:30.046 '
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.046 --rc genhtml_branch_coverage=1
00:17:30.046 --rc genhtml_function_coverage=1
00:17:30.046 --rc genhtml_legend=1
00:17:30.046 --rc geninfo_all_blocks=1
00:17:30.046 --rc geninfo_unexecuted_blocks=1
00:17:30.046 
00:17:30.046 '
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.046 --rc genhtml_branch_coverage=1
00:17:30.046 --rc genhtml_function_coverage=1
00:17:30.046 --rc genhtml_legend=1
00:17:30.046 --rc geninfo_all_blocks=1
00:17:30.046 --rc geninfo_unexecuted_blocks=1
00:17:30.046 
00:17:30.046 '
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:30.046 01:16:57 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:30.046 01:16:57 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:30.046 01:16:57 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:30.046 01:16:57 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:30.046 01:16:57 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:30.046 01:16:57 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:30.046 01:16:57 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:30.046 01:16:57 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:30.046 01:16:57 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:30.046 01:16:57 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:30.046 01:16:57 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:30.046 01:16:57 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:30.046 01:16:57 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:30.046 01:16:57 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:30.046 01:16:57 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:30.046 01:16:57 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:30.046 01:16:57 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:30.046 01:16:57 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:30.046 01:16:57 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:30.046 01:16:57 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:30.046 01:16:57 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:30.046 01:16:57 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:30.046 01:16:57 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:17:30.046 01:16:57 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
01:16:57 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87692
01:16:57 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87692
01:16:57 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
01:16:57 ftl -- common/autotest_common.sh@835 -- # '[' -z 87692 ']'
01:16:57 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:57 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:57 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.046 01:16:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:30.046 [2024-12-14 01:16:57.888492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:30.046 [2024-12-14 01:16:57.888657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87692 ] 00:17:30.046 [2024-12-14 01:16:58.033557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.046 [2024-12-14 01:16:58.062934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.046 01:16:58 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.046 01:16:58 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:30.046 01:16:58 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:30.046 01:16:58 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:30.046 01:16:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:30.046 01:16:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:30.046 01:16:59 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:30.046 01:16:59 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:30.046 01:16:59 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@50 -- # break 
00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@63 -- # break 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@66 -- # killprocess 87692 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@954 -- # '[' -z 87692 ']' 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@958 -- # kill -0 87692 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@959 -- # uname 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87692 00:17:30.046 killing process with pid 87692 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87692' 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@973 -- # kill 87692 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@978 -- # wait 87692 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:30.046 01:17:00 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:30.046 01:17:00 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.046 01:17:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:30.046 ************************************ 00:17:30.046 START TEST ftl_fio_basic 00:17:30.046 ************************************ 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:30.046 * Looking for test storage... 00:17:30.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.046 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 
00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.047 --rc genhtml_branch_coverage=1 00:17:30.047 --rc genhtml_function_coverage=1 00:17:30.047 --rc genhtml_legend=1 00:17:30.047 --rc geninfo_all_blocks=1 00:17:30.047 --rc geninfo_unexecuted_blocks=1 00:17:30.047 00:17:30.047 ' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:30.047 --rc genhtml_branch_coverage=1 00:17:30.047 --rc genhtml_function_coverage=1 00:17:30.047 --rc genhtml_legend=1 00:17:30.047 --rc geninfo_all_blocks=1 00:17:30.047 --rc geninfo_unexecuted_blocks=1 00:17:30.047 00:17:30.047 ' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.047 --rc genhtml_branch_coverage=1 00:17:30.047 --rc genhtml_function_coverage=1 00:17:30.047 --rc genhtml_legend=1 00:17:30.047 --rc geninfo_all_blocks=1 00:17:30.047 --rc geninfo_unexecuted_blocks=1 00:17:30.047 00:17:30.047 ' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.047 --rc genhtml_branch_coverage=1 00:17:30.047 --rc genhtml_function_coverage=1 00:17:30.047 --rc genhtml_legend=1 00:17:30.047 --rc geninfo_all_blocks=1 00:17:30.047 --rc geninfo_unexecuted_blocks=1 00:17:30.047 00:17:30.047 ' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # 
export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export 
FTL_BDEV_NAME=ftl0 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87808 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87808 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87808 ']' 00:17:30.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.047 01:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.047 [2024-12-14 01:17:00.929198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:30.047 [2024-12-14 01:17:00.929354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87808 ] 00:17:30.047 [2024-12-14 01:17:01.080322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.047 [2024-12-14 01:17:01.112051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.047 [2024-12-14 01:17:01.112389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.048 [2024-12-14 01:17:01.112440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:30.048 01:17:01 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:30.048 01:17:02 
ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:30.048 { 00:17:30.048 "name": "nvme0n1", 00:17:30.048 "aliases": [ 00:17:30.048 "94aaac3f-8619-49bc-aee3-e3479d1f9e5b" 00:17:30.048 ], 00:17:30.048 "product_name": "NVMe disk", 00:17:30.048 "block_size": 4096, 00:17:30.048 "num_blocks": 1310720, 00:17:30.048 "uuid": "94aaac3f-8619-49bc-aee3-e3479d1f9e5b", 00:17:30.048 "numa_id": -1, 00:17:30.048 "assigned_rate_limits": { 00:17:30.048 "rw_ios_per_sec": 0, 00:17:30.048 "rw_mbytes_per_sec": 0, 00:17:30.048 "r_mbytes_per_sec": 0, 00:17:30.048 "w_mbytes_per_sec": 0 00:17:30.048 }, 00:17:30.048 "claimed": false, 00:17:30.048 "zoned": false, 00:17:30.048 "supported_io_types": { 00:17:30.048 "read": true, 00:17:30.048 "write": true, 00:17:30.048 "unmap": true, 00:17:30.048 "flush": true, 00:17:30.048 "reset": true, 00:17:30.048 "nvme_admin": true, 00:17:30.048 "nvme_io": true, 00:17:30.048 "nvme_io_md": false, 00:17:30.048 "write_zeroes": true, 00:17:30.048 "zcopy": false, 00:17:30.048 "get_zone_info": false, 00:17:30.048 "zone_management": false, 00:17:30.048 "zone_append": false, 00:17:30.048 "compare": true, 00:17:30.048 "compare_and_write": false, 00:17:30.048 "abort": true, 00:17:30.048 "seek_hole": false, 00:17:30.048 "seek_data": false, 00:17:30.048 "copy": true, 00:17:30.048 "nvme_iov_md": false 00:17:30.048 }, 00:17:30.048 "driver_specific": { 00:17:30.048 "nvme": [ 00:17:30.048 { 00:17:30.048 "pci_address": "0000:00:11.0", 00:17:30.048 "trid": { 00:17:30.048 "trtype": "PCIe", 00:17:30.048 "traddr": "0000:00:11.0" 00:17:30.048 }, 00:17:30.048 "ctrlr_data": { 00:17:30.048 "cntlid": 0, 00:17:30.048 "vendor_id": "0x1b36", 
00:17:30.048 "model_number": "QEMU NVMe Ctrl", 00:17:30.048 "serial_number": "12341", 00:17:30.048 "firmware_revision": "8.0.0", 00:17:30.048 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:30.048 "oacs": { 00:17:30.048 "security": 0, 00:17:30.048 "format": 1, 00:17:30.048 "firmware": 0, 00:17:30.048 "ns_manage": 1 00:17:30.048 }, 00:17:30.048 "multi_ctrlr": false, 00:17:30.048 "ana_reporting": false 00:17:30.048 }, 00:17:30.048 "vs": { 00:17:30.048 "nvme_version": "1.4" 00:17:30.048 }, 00:17:30.048 "ns_data": { 00:17:30.048 "id": 1, 00:17:30.048 "can_share": false 00:17:30.048 } 00:17:30.048 } 00:17:30.048 ], 00:17:30.048 "mp_policy": "active_passive" 00:17:30.048 } 00:17:30.048 } 00:17:30.048 ]' 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- 
ftl/common.sh@68 -- # lvs=ff817681-8bb9-4599-8a4d-3c5ee1022593 00:17:30.048 01:17:02 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff817681-8bb9-4599-8a4d-3c5ee1022593 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:30.048 { 00:17:30.048 "name": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.048 "aliases": [ 00:17:30.048 "lvs/nvme0n1p0" 00:17:30.048 ], 00:17:30.048 "product_name": "Logical Volume", 00:17:30.048 "block_size": 4096, 00:17:30.048 "num_blocks": 26476544, 00:17:30.048 "uuid": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.048 
"assigned_rate_limits": { 00:17:30.048 "rw_ios_per_sec": 0, 00:17:30.048 "rw_mbytes_per_sec": 0, 00:17:30.048 "r_mbytes_per_sec": 0, 00:17:30.048 "w_mbytes_per_sec": 0 00:17:30.048 }, 00:17:30.048 "claimed": false, 00:17:30.048 "zoned": false, 00:17:30.048 "supported_io_types": { 00:17:30.048 "read": true, 00:17:30.048 "write": true, 00:17:30.048 "unmap": true, 00:17:30.048 "flush": false, 00:17:30.048 "reset": true, 00:17:30.048 "nvme_admin": false, 00:17:30.048 "nvme_io": false, 00:17:30.048 "nvme_io_md": false, 00:17:30.048 "write_zeroes": true, 00:17:30.048 "zcopy": false, 00:17:30.048 "get_zone_info": false, 00:17:30.048 "zone_management": false, 00:17:30.048 "zone_append": false, 00:17:30.048 "compare": false, 00:17:30.048 "compare_and_write": false, 00:17:30.048 "abort": false, 00:17:30.048 "seek_hole": true, 00:17:30.048 "seek_data": true, 00:17:30.048 "copy": false, 00:17:30.048 "nvme_iov_md": false 00:17:30.048 }, 00:17:30.048 "driver_specific": { 00:17:30.048 "lvol": { 00:17:30.048 "lvol_store_uuid": "ff817681-8bb9-4599-8a4d-3c5ee1022593", 00:17:30.048 "base_bdev": "nvme0n1", 00:17:30.048 "thin_provision": true, 00:17:30.048 "num_allocated_clusters": 0, 00:17:30.048 "snapshot": false, 00:17:30.048 "clone": false, 00:17:30.048 "esnap_clone": false 00:17:30.048 } 00:17:30.048 } 00:17:30.048 } 00:17:30.048 ]' 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:30.048 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 
00:17:30.049 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:30.049 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:30.310 { 00:17:30.310 "name": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.310 "aliases": [ 00:17:30.310 "lvs/nvme0n1p0" 00:17:30.310 ], 00:17:30.310 "product_name": "Logical Volume", 00:17:30.310 "block_size": 4096, 00:17:30.310 "num_blocks": 26476544, 00:17:30.310 "uuid": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.310 "assigned_rate_limits": { 00:17:30.310 "rw_ios_per_sec": 0, 00:17:30.310 "rw_mbytes_per_sec": 0, 00:17:30.310 "r_mbytes_per_sec": 0, 00:17:30.310 "w_mbytes_per_sec": 0 00:17:30.310 }, 00:17:30.310 "claimed": false, 00:17:30.310 "zoned": false, 00:17:30.310 "supported_io_types": { 00:17:30.310 "read": true, 00:17:30.310 "write": true, 00:17:30.310 "unmap": true, 00:17:30.310 "flush": false, 00:17:30.310 "reset": true, 00:17:30.310 "nvme_admin": false, 
00:17:30.310 "nvme_io": false, 00:17:30.310 "nvme_io_md": false, 00:17:30.310 "write_zeroes": true, 00:17:30.310 "zcopy": false, 00:17:30.310 "get_zone_info": false, 00:17:30.310 "zone_management": false, 00:17:30.310 "zone_append": false, 00:17:30.310 "compare": false, 00:17:30.310 "compare_and_write": false, 00:17:30.310 "abort": false, 00:17:30.310 "seek_hole": true, 00:17:30.310 "seek_data": true, 00:17:30.310 "copy": false, 00:17:30.310 "nvme_iov_md": false 00:17:30.310 }, 00:17:30.310 "driver_specific": { 00:17:30.310 "lvol": { 00:17:30.310 "lvol_store_uuid": "ff817681-8bb9-4599-8a4d-3c5ee1022593", 00:17:30.310 "base_bdev": "nvme0n1", 00:17:30.310 "thin_provision": true, 00:17:30.310 "num_allocated_clusters": 0, 00:17:30.310 "snapshot": false, 00:17:30.310 "clone": false, 00:17:30.310 "esnap_clone": false 00:17:30.310 } 00:17:30.310 } 00:17:30.310 } 00:17:30.310 ]' 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:30.310 01:17:03 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:30.571 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary 
operator expected 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:30.571 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa66856-d03b-4255-8530-c51023ab8f86 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:30.831 { 00:17:30.831 "name": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.831 "aliases": [ 00:17:30.831 "lvs/nvme0n1p0" 00:17:30.831 ], 00:17:30.831 "product_name": "Logical Volume", 00:17:30.831 "block_size": 4096, 00:17:30.831 "num_blocks": 26476544, 00:17:30.831 "uuid": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:30.831 "assigned_rate_limits": { 00:17:30.831 "rw_ios_per_sec": 0, 00:17:30.831 "rw_mbytes_per_sec": 0, 00:17:30.831 "r_mbytes_per_sec": 0, 00:17:30.831 "w_mbytes_per_sec": 0 00:17:30.831 }, 00:17:30.831 "claimed": false, 00:17:30.831 "zoned": false, 00:17:30.831 "supported_io_types": { 00:17:30.831 "read": true, 00:17:30.831 "write": true, 00:17:30.831 "unmap": true, 00:17:30.831 "flush": false, 00:17:30.831 "reset": true, 00:17:30.831 "nvme_admin": false, 00:17:30.831 "nvme_io": false, 00:17:30.831 "nvme_io_md": false, 00:17:30.831 "write_zeroes": true, 00:17:30.831 "zcopy": false, 00:17:30.831 "get_zone_info": false, 00:17:30.831 "zone_management": false, 00:17:30.831 "zone_append": false, 00:17:30.831 "compare": false, 00:17:30.831 "compare_and_write": false, 00:17:30.831 "abort": false, 00:17:30.831 "seek_hole": true, 00:17:30.831 "seek_data": 
true, 00:17:30.831 "copy": false, 00:17:30.831 "nvme_iov_md": false 00:17:30.831 }, 00:17:30.831 "driver_specific": { 00:17:30.831 "lvol": { 00:17:30.831 "lvol_store_uuid": "ff817681-8bb9-4599-8a4d-3c5ee1022593", 00:17:30.831 "base_bdev": "nvme0n1", 00:17:30.831 "thin_provision": true, 00:17:30.831 "num_allocated_clusters": 0, 00:17:30.831 "snapshot": false, 00:17:30.831 "clone": false, 00:17:30.831 "esnap_clone": false 00:17:30.831 } 00:17:30.831 } 00:17:30.831 } 00:17:30.831 ]' 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:30.831 01:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8fa66856-d03b-4255-8530-c51023ab8f86 -c nvc0n1p0 --l2p_dram_limit 60 00:17:31.093 [2024-12-14 01:17:04.460951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.460986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.093 [2024-12-14 01:17:04.460997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.093 [2024-12-14 01:17:04.461006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.461067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 
01:17:04.461077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.093 [2024-12-14 01:17:04.461083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:31.093 [2024-12-14 01:17:04.461100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.461121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.093 [2024-12-14 01:17:04.461361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.093 [2024-12-14 01:17:04.461380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.461389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.093 [2024-12-14 01:17:04.461395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:31.093 [2024-12-14 01:17:04.461402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.461437] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0db43174-440a-44a3-a24c-89a8e4a039a2 00:17:31.093 [2024-12-14 01:17:04.462442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.462458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:31.093 [2024-12-14 01:17:04.462467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:31.093 [2024-12-14 01:17:04.462473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.467647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.467669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.093 [2024-12-14 01:17:04.467678] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:17:31.093 [2024-12-14 01:17:04.467686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.467808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.467816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.093 [2024-12-14 01:17:04.467824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:31.093 [2024-12-14 01:17:04.467830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.467874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.467884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.093 [2024-12-14 01:17:04.467892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.093 [2024-12-14 01:17:04.467897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.467925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.093 [2024-12-14 01:17:04.469220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.469244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.093 [2024-12-14 01:17:04.469251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.301 ms 00:17:31.093 [2024-12-14 01:17:04.469259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.469291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.469299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.093 [2024-12-14 01:17:04.469305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:31.093 
[2024-12-14 01:17:04.469314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.469352] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:31.093 [2024-12-14 01:17:04.469461] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.093 [2024-12-14 01:17:04.469470] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.093 [2024-12-14 01:17:04.469479] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.093 [2024-12-14 01:17:04.469487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.093 [2024-12-14 01:17:04.469498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.093 [2024-12-14 01:17:04.469504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:31.093 [2024-12-14 01:17:04.469512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.093 [2024-12-14 01:17:04.469517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.093 [2024-12-14 01:17:04.469523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.093 [2024-12-14 01:17:04.469529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.093 [2024-12-14 01:17:04.469547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.093 [2024-12-14 01:17:04.469561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:31.093 [2024-12-14 01:17:04.469568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.469655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:17:31.093 [2024-12-14 01:17:04.469665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.093 [2024-12-14 01:17:04.469673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:31.093 [2024-12-14 01:17:04.469680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.093 [2024-12-14 01:17:04.469767] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.093 [2024-12-14 01:17:04.469786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.093 [2024-12-14 01:17:04.469792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.093 [2024-12-14 01:17:04.469799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.093 [2024-12-14 01:17:04.469805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.093 [2024-12-14 01:17:04.469812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.093 [2024-12-14 01:17:04.469817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:31.093 [2024-12-14 01:17:04.469824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.093 [2024-12-14 01:17:04.469830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:31.093 [2024-12-14 01:17:04.469836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.093 [2024-12-14 01:17:04.469841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.093 [2024-12-14 01:17:04.469848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:31.093 [2024-12-14 01:17:04.469853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.093 [2024-12-14 01:17:04.469861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.093 [2024-12-14 01:17:04.469871] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:17:31.093 [2024-12-14 01:17:04.469879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.093 [2024-12-14 01:17:04.469885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.094 [2024-12-14 01:17:04.469892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:31.094 [2024-12-14 01:17:04.469898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.094 [2024-12-14 01:17:04.469911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.094 [2024-12-14 01:17:04.469924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.094 [2024-12-14 01:17:04.469931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.094 [2024-12-14 01:17:04.469944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.094 [2024-12-14 01:17:04.469950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.094 [2024-12-14 01:17:04.469963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.094 [2024-12-14 01:17:04.469972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.094 [2024-12-14 01:17:04.469986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.094 [2024-12-14 01:17:04.469992] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:31.094 [2024-12-14 01:17:04.469999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.094 [2024-12-14 01:17:04.470004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.094 [2024-12-14 01:17:04.470011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:31.094 [2024-12-14 01:17:04.470017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.094 [2024-12-14 01:17:04.470024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.094 [2024-12-14 01:17:04.470029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:31.094 [2024-12-14 01:17:04.470036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.094 [2024-12-14 01:17:04.470042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.094 [2024-12-14 01:17:04.470050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:31.094 [2024-12-14 01:17:04.470055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.094 [2024-12-14 01:17:04.470063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.094 [2024-12-14 01:17:04.470069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.094 [2024-12-14 01:17:04.470078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.094 [2024-12-14 01:17:04.470087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.094 [2024-12-14 01:17:04.470096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.094 [2024-12-14 01:17:04.470102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.094 [2024-12-14 01:17:04.470109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.094 [2024-12-14 
01:17:04.470115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.094 [2024-12-14 01:17:04.470123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.094 [2024-12-14 01:17:04.470128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.094 [2024-12-14 01:17:04.470136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.094 [2024-12-14 01:17:04.470145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:31.094 [2024-12-14 01:17:04.470159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:31.094 [2024-12-14 01:17:04.470168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:31.094 [2024-12-14 01:17:04.470174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:31.094 [2024-12-14 01:17:04.470181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:31.094 [2024-12-14 01:17:04.470187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:31.094 [2024-12-14 01:17:04.470196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:31.094 [2024-12-14 01:17:04.470202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:31.094 [2024-12-14 01:17:04.470210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:31.094 [2024-12-14 01:17:04.470216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:31.094 [2024-12-14 01:17:04.470249] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.094 [2024-12-14 01:17:04.470256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.094 [2024-12-14 01:17:04.470269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.094 [2024-12-14 01:17:04.470276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.094 [2024-12-14 01:17:04.470282] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.094 [2024-12-14 01:17:04.470299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.094 [2024-12-14 01:17:04.470304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.094 [2024-12-14 01:17:04.470313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:17:31.094 [2024-12-14 01:17:04.470320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.094 [2024-12-14 01:17:04.470380] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:31.094 [2024-12-14 01:17:04.470389] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:33.000 [2024-12-14 01:17:06.319345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.319391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:33.000 [2024-12-14 01:17:06.319404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1848.953 ms 00:17:33.000 [2024-12-14 01:17:06.319410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.327401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.327431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.000 [2024-12-14 01:17:06.327444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.904 ms 00:17:33.000 [2024-12-14 01:17:06.327450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.327526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.327533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band 
addresses 00:17:33.000 [2024-12-14 01:17:06.327552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:33.000 [2024-12-14 01:17:06.327557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.345153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.345185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.000 [2024-12-14 01:17:06.345207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.546 ms 00:17:33.000 [2024-12-14 01:17:06.345222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.345266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.345273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.000 [2024-12-14 01:17:06.345281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:33.000 [2024-12-14 01:17:06.345287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.345670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.345690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.000 [2024-12-14 01:17:06.345699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:17:33.000 [2024-12-14 01:17:06.345706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.345811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.345821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.000 [2024-12-14 01:17:06.345840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:33.000 [2024-12-14 01:17:06.345847] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.352557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.352593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.000 [2024-12-14 01:17:06.352609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.682 ms 00:17:33.000 [2024-12-14 01:17:06.352651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.361081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:33.000 [2024-12-14 01:17:06.373588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.373613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:33.000 [2024-12-14 01:17:06.373630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.835 ms 00:17:33.000 [2024-12-14 01:17:06.373647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.407894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.407923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:33.000 [2024-12-14 01:17:06.407931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.214 ms 00:17:33.000 [2024-12-14 01:17:06.407951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.408100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.408111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:33.000 [2024-12-14 01:17:06.408119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:33.000 [2024-12-14 01:17:06.408126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 
[2024-12-14 01:17:06.410487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.410515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:33.000 [2024-12-14 01:17:06.410524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:17:33.000 [2024-12-14 01:17:06.410532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.412558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.412584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:33.000 [2024-12-14 01:17:06.412591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:17:33.000 [2024-12-14 01:17:06.412598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.412856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.412869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:33.000 [2024-12-14 01:17:06.412877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:33.000 [2024-12-14 01:17:06.412886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.430800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.430834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:33.000 [2024-12-14 01:17:06.430843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.886 ms 00:17:33.000 [2024-12-14 01:17:06.430851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.433980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.434009] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:33.000 [2024-12-14 01:17:06.434018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.073 ms 00:17:33.000 [2024-12-14 01:17:06.434026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.436380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.436405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:33.000 [2024-12-14 01:17:06.436411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.322 ms 00:17:33.000 [2024-12-14 01:17:06.436418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.438766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.438792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:33.000 [2024-12-14 01:17:06.438799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:17:33.000 [2024-12-14 01:17:06.438807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.438847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.438864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:33.000 [2024-12-14 01:17:06.438872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:33.000 [2024-12-14 01:17:06.438879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.438942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.000 [2024-12-14 01:17:06.438950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:33.000 [2024-12-14 01:17:06.438958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:33.000 [2024-12-14 01:17:06.438965] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.000 [2024-12-14 01:17:06.439785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 1978.462 ms, result 0 00:17:33.000 { 00:17:33.000 "name": "ftl0", 00:17:33.000 "uuid": "0db43174-440a-44a3-a24c-89a8e4a039a2" 00:17:33.000 } 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:33.000 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:33.259 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:33.259 [ 00:17:33.259 { 00:17:33.259 "name": "ftl0", 00:17:33.259 "aliases": [ 00:17:33.259 "0db43174-440a-44a3-a24c-89a8e4a039a2" 00:17:33.259 ], 00:17:33.259 "product_name": "FTL disk", 00:17:33.259 "block_size": 4096, 00:17:33.259 "num_blocks": 20971520, 00:17:33.259 "uuid": "0db43174-440a-44a3-a24c-89a8e4a039a2", 00:17:33.259 "assigned_rate_limits": { 00:17:33.259 "rw_ios_per_sec": 0, 00:17:33.259 "rw_mbytes_per_sec": 0, 00:17:33.259 "r_mbytes_per_sec": 0, 00:17:33.259 "w_mbytes_per_sec": 0 00:17:33.259 }, 00:17:33.259 "claimed": false, 00:17:33.259 "zoned": false, 00:17:33.259 "supported_io_types": { 00:17:33.259 "read": true, 00:17:33.259 "write": true, 00:17:33.259 "unmap": true, 00:17:33.259 "flush": true, 00:17:33.259 "reset": false, 00:17:33.259 "nvme_admin": 
false, 00:17:33.259 "nvme_io": false, 00:17:33.259 "nvme_io_md": false, 00:17:33.259 "write_zeroes": true, 00:17:33.259 "zcopy": false, 00:17:33.259 "get_zone_info": false, 00:17:33.259 "zone_management": false, 00:17:33.259 "zone_append": false, 00:17:33.259 "compare": false, 00:17:33.259 "compare_and_write": false, 00:17:33.259 "abort": false, 00:17:33.259 "seek_hole": false, 00:17:33.259 "seek_data": false, 00:17:33.259 "copy": false, 00:17:33.259 "nvme_iov_md": false 00:17:33.259 }, 00:17:33.259 "driver_specific": { 00:17:33.259 "ftl": { 00:17:33.259 "base_bdev": "8fa66856-d03b-4255-8530-c51023ab8f86", 00:17:33.259 "cache": "nvc0n1p0" 00:17:33.259 } 00:17:33.259 } 00:17:33.259 } 00:17:33.259 ] 00:17:33.259 01:17:06 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:33.259 01:17:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:33.259 01:17:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:33.518 01:17:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:33.518 01:17:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:33.518 [2024-12-14 01:17:07.064649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.064682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:33.518 [2024-12-14 01:17:07.064695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:33.518 [2024-12-14 01:17:07.064702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.064736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:33.518 [2024-12-14 01:17:07.065153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.065186] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:33.518 [2024-12-14 01:17:07.065195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:17:33.518 [2024-12-14 01:17:07.065202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.065615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.065639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:33.518 [2024-12-14 01:17:07.065647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:17:33.518 [2024-12-14 01:17:07.065654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.068065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.068080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:33.518 [2024-12-14 01:17:07.068087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:17:33.518 [2024-12-14 01:17:07.068097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.072722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.072742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:33.518 [2024-12-14 01:17:07.072750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:17:33.518 [2024-12-14 01:17:07.072757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.074053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.074081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:33.518 [2024-12-14 01:17:07.074087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:17:33.518 [2024-12-14 01:17:07.074094] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.518 [2024-12-14 01:17:07.076686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.518 [2024-12-14 01:17:07.076714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:33.519 [2024-12-14 01:17:07.076724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:17:33.519 [2024-12-14 01:17:07.076731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.076877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.519 [2024-12-14 01:17:07.076890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:33.519 [2024-12-14 01:17:07.076896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:33.519 [2024-12-14 01:17:07.076904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.078360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.519 [2024-12-14 01:17:07.078389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:33.519 [2024-12-14 01:17:07.078397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:17:33.519 [2024-12-14 01:17:07.078404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.079359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.519 [2024-12-14 01:17:07.079384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:33.519 [2024-12-14 01:17:07.079392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:17:33.519 [2024-12-14 01:17:07.079398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.080160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.519 
[2024-12-14 01:17:07.080184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:33.519 [2024-12-14 01:17:07.080191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:33.519 [2024-12-14 01:17:07.080200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.080957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.519 [2024-12-14 01:17:07.080981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:33.519 [2024-12-14 01:17:07.080988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:17:33.519 [2024-12-14 01:17:07.080995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.519 [2024-12-14 01:17:07.081069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:33.519 [2024-12-14 01:17:07.081089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
8: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
22: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081515] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:33.519 [2024-12-14 01:17:07.081574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081603] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081704] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:33.520 [2024-12-14 01:17:07.081773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:33.520 [2024-12-14 01:17:07.081778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0db43174-440a-44a3-a24c-89a8e4a039a2 00:17:33.520 [2024-12-14 01:17:07.081788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:33.520 [2024-12-14 01:17:07.081793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:33.520 [2024-12-14 01:17:07.081800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:33.520 [2024-12-14 01:17:07.081806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:33.520 [2024-12-14 
01:17:07.081813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:33.520 [2024-12-14 01:17:07.081828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:33.520 [2024-12-14 01:17:07.081835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:33.520 [2024-12-14 01:17:07.081840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:33.520 [2024-12-14 01:17:07.081846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:33.520 [2024-12-14 01:17:07.081852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.520 [2024-12-14 01:17:07.081858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:33.520 [2024-12-14 01:17:07.081865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:17:33.520 [2024-12-14 01:17:07.081872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.083250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.520 [2024-12-14 01:17:07.083270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:33.520 [2024-12-14 01:17:07.083276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:17:33.520 [2024-12-14 01:17:07.083284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.083355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.520 [2024-12-14 01:17:07.083362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:33.520 [2024-12-14 01:17:07.083369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:33.520 [2024-12-14 01:17:07.083377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.088139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 
[2024-12-14 01:17:07.088166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.520 [2024-12-14 01:17:07.088173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.088180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.088229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.088237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.520 [2024-12-14 01:17:07.088243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.088261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.088325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.088336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.520 [2024-12-14 01:17:07.088342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.088349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.088376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.088383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.520 [2024-12-14 01:17:07.088389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.088395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.097031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.097068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.520 [2024-12-14 01:17:07.097077] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.097085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.520 [2024-12-14 01:17:07.104121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.520 [2024-12-14 01:17:07.104194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.520 [2024-12-14 01:17:07.104273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.520 [2024-12-14 01:17:07.104363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 
01:17:07.104407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:33.520 [2024-12-14 01:17:07.104421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.520 [2024-12-14 01:17:07.104484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.520 [2024-12-14 01:17:07.104543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.520 [2024-12-14 01:17:07.104549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.520 [2024-12-14 01:17:07.104556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.520 [2024-12-14 01:17:07.104715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.055 ms, result 0 00:17:33.520 true 00:17:33.520 01:17:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87808 00:17:33.520 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87808 ']' 00:17:33.520 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87808 00:17:33.520 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:33.520 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.520 01:17:07 
ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87808 00:17:33.780 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.780 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.780 killing process with pid 87808 00:17:33.780 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87808' 00:17:33.780 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87808 00:17:33.780 01:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87808 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:41.953 01:17:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:41.953 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:41.953 fio-3.35 00:17:41.953 Starting 1 thread 00:17:45.254 00:17:45.254 test: (groupid=0, jobs=1): err= 0: pid=87966: Sat Dec 14 01:17:18 2024 00:17:45.254 read: IOPS=1058, BW=70.3MiB/s (73.7MB/s)(255MiB/3621msec) 00:17:45.254 slat (nsec): min=3157, max=31093, avg=5498.23, stdev=2653.71 00:17:45.254 clat (usec): min=239, max=1361, avg=426.47, stdev=171.90 00:17:45.254 lat (usec): min=243, max=1369, avg=431.97, stdev=172.29 00:17:45.254 clat percentiles (usec): 00:17:45.254 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:17:45.254 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 338], 60.00th=[ 420], 00:17:45.254 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 619], 95.00th=[ 816], 00:17:45.254 | 99.00th=[ 988], 
99.50th=[ 1090], 99.90th=[ 1254], 99.95th=[ 1336], 00:17:45.254 | 99.99th=[ 1369] 00:17:45.254 write: IOPS=1065, BW=70.8MiB/s (74.2MB/s)(256MiB/3618msec); 0 zone resets 00:17:45.254 slat (usec): min=14, max=234, avg=22.90, stdev= 7.67 00:17:45.254 clat (usec): min=260, max=1908, avg=472.69, stdev=205.57 00:17:45.254 lat (usec): min=280, max=1960, avg=495.58, stdev=206.08 00:17:45.254 clat percentiles (usec): 00:17:45.254 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:17:45.254 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 371], 60.00th=[ 457], 00:17:45.254 | 70.00th=[ 553], 80.00th=[ 619], 90.00th=[ 791], 95.00th=[ 906], 00:17:45.254 | 99.00th=[ 1123], 99.50th=[ 1221], 99.90th=[ 1565], 99.95th=[ 1827], 00:17:45.254 | 99.99th=[ 1909] 00:17:45.254 bw ( KiB/s): min=48008, max=98192, per=99.19%, avg=71885.71, stdev=20062.94, samples=7 00:17:45.254 iops : min= 706, max= 1444, avg=1057.14, stdev=295.04, samples=7 00:17:45.254 lat (usec) : 250=0.03%, 500=66.54%, 750=24.28%, 1000=7.69% 00:17:45.254 lat (msec) : 2=1.47% 00:17:45.254 cpu : usr=99.03%, sys=0.17%, ctx=6, majf=0, minf=1324 00:17:45.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.254 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:45.254 00:17:45.254 Run status group 0 (all jobs): 00:17:45.254 READ: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=255MiB (267MB), run=3621-3621msec 00:17:45.254 WRITE: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=256MiB (269MB), run=3618-3618msec 00:17:45.827 ----------------------------------------------------- 00:17:45.827 Suppressions used: 00:17:45.827 count bytes template 00:17:45.827 1 5 /usr/src/fio/parse.c 
00:17:45.827 1 8 libtcmalloc_minimal.so 00:17:45.827 1 904 libcrypto.so 00:17:45.827 ----------------------------------------------------- 00:17:45.827 00:17:45.827 01:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:45.827 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.827 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.827 01:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:45.827 01:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.088 01:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:46.088 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:46.088 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:46.089 fio-3.35 00:17:46.089 Starting 2 threads 00:18:12.650 00:18:12.651 first_half: (groupid=0, jobs=1): err= 0: pid=88058: Sat Dec 14 01:17:42 2024 00:18:12.651 read: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(256MiB/22121msec) 00:18:12.651 slat (nsec): min=3196, max=96292, avg=4755.94, stdev=1391.96 00:18:12.651 clat (usec): min=548, max=363214, avg=36520.67, stdev=24680.53 00:18:12.651 lat (usec): min=552, max=363219, avg=36525.42, stdev=24680.66 00:18:12.651 clat percentiles (msec): 00:18:12.651 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:12.651 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:12.651 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 69], 00:18:12.651 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 279], 99.95th=[ 326], 00:18:12.651 | 99.99th=[ 355] 00:18:12.651 write: IOPS=2966, 
BW=11.6MiB/s (12.2MB/s)(256MiB/22090msec); 0 zone resets 00:18:12.651 slat (usec): min=3, max=759, avg= 6.00, stdev= 4.03 00:18:12.651 clat (usec): min=299, max=48103, avg=6694.52, stdev=6886.14 00:18:12.651 lat (usec): min=305, max=48108, avg=6700.52, stdev=6886.35 00:18:12.651 clat percentiles (usec): 00:18:12.651 | 1.00th=[ 742], 5.00th=[ 865], 10.00th=[ 1037], 20.00th=[ 2376], 00:18:12.651 | 30.00th=[ 3163], 40.00th=[ 3818], 50.00th=[ 4752], 60.00th=[ 5473], 00:18:12.651 | 70.00th=[ 6063], 80.00th=[10159], 90.00th=[13698], 95.00th=[23725], 00:18:12.651 | 99.00th=[32375], 99.50th=[33817], 99.90th=[44303], 99.95th=[45351], 00:18:12.651 | 99.99th=[47449] 00:18:12.651 bw ( KiB/s): min= 104, max=50296, per=99.73%, avg=23671.27, stdev=14337.74, samples=22 00:18:12.651 iops : min= 26, max=12574, avg=5917.82, stdev=3584.43, samples=22 00:18:12.651 lat (usec) : 500=0.03%, 750=0.57%, 1000=4.02% 00:18:12.651 lat (msec) : 2=4.12%, 4=12.42%, 10=20.17%, 20=7.30%, 50=48.16% 00:18:12.651 lat (msec) : 100=1.48%, 250=1.63%, 500=0.10% 00:18:12.651 cpu : usr=99.23%, sys=0.11%, ctx=40, majf=0, minf=5597 00:18:12.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:12.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.651 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.651 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.651 second_half: (groupid=0, jobs=1): err= 0: pid=88059: Sat Dec 14 01:17:42 2024 00:18:12.651 read: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(256MiB/21943msec) 00:18:12.651 slat (nsec): min=3132, max=23056, avg=4999.77, stdev=1088.08 00:18:12.651 clat (msec): min=10, max=232, avg=36.62, stdev=19.73 00:18:12.651 lat (msec): min=10, max=232, avg=36.62, stdev=19.73 00:18:12.651 clat percentiles (msec): 00:18:12.651 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 
00:18:12.651 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:12.651 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 66], 00:18:12.651 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 205], 00:18:12.651 | 99.99th=[ 228] 00:18:12.651 write: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(256MiB/21816msec); 0 zone resets 00:18:12.651 slat (usec): min=3, max=2084, avg= 6.18, stdev=11.62 00:18:12.651 clat (usec): min=356, max=36801, avg=6247.12, stdev=4629.02 00:18:12.651 lat (usec): min=363, max=36806, avg=6253.30, stdev=4629.20 00:18:12.651 clat percentiles (usec): 00:18:12.651 | 1.00th=[ 791], 5.00th=[ 1516], 10.00th=[ 2376], 20.00th=[ 3064], 00:18:12.651 | 30.00th=[ 3720], 40.00th=[ 4490], 50.00th=[ 5080], 60.00th=[ 5604], 00:18:12.651 | 70.00th=[ 5997], 80.00th=[ 8029], 90.00th=[12911], 95.00th=[15139], 00:18:12.651 | 99.00th=[22938], 99.50th=[30016], 99.90th=[33817], 99.95th=[34866], 00:18:12.651 | 99.99th=[36439] 00:18:12.651 bw ( KiB/s): min= 744, max=46264, per=100.00%, avg=24789.67, stdev=14884.76, samples=21 00:18:12.651 iops : min= 186, max=11566, avg=6197.38, stdev=3721.15, samples=21 00:18:12.651 lat (usec) : 500=0.03%, 750=0.29%, 1000=0.83% 00:18:12.651 lat (msec) : 2=2.42%, 4=13.20%, 10=24.12%, 20=8.46%, 50=47.34% 00:18:12.651 lat (msec) : 100=1.76%, 250=1.55% 00:18:12.651 cpu : usr=99.30%, sys=0.14%, ctx=38, majf=0, minf=5535 00:18:12.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:12.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.651 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.651 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.651 00:18:12.651 Run status group 0 (all jobs): 00:18:12.651 READ: bw=23.1MiB/s (24.2MB/s), 11.6MiB/s-11.7MiB/s (12.1MB/s-12.2MB/s), io=512MiB (536MB), run=21943-22121msec 00:18:12.651 WRITE: 
bw=23.2MiB/s (24.3MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=512MiB (537MB), run=21816-22090msec 00:18:12.651 ----------------------------------------------------- 00:18:12.651 Suppressions used: 00:18:12.651 count bytes template 00:18:12.651 2 10 /usr/src/fio/parse.c 00:18:12.651 3 288 /usr/src/fio/iolog.c 00:18:12.651 1 8 libtcmalloc_minimal.so 00:18:12.651 1 904 libcrypto.so 00:18:12.651 ----------------------------------------------------- 00:18:12.651 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # 
shift 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.651 01:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:12.651 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:12.651 fio-3.35 00:18:12.651 Starting 1 thread 00:18:27.544 00:18:27.544 test: (groupid=0, jobs=1): err= 0: pid=88344: Sat Dec 14 01:17:59 2024 00:18:27.544 read: IOPS=7959, BW=31.1MiB/s (32.6MB/s)(255MiB/8192msec) 00:18:27.544 slat (nsec): min=3156, max=21188, avg=4840.25, stdev=1119.69 00:18:27.544 clat (usec): min=563, max=33886, avg=16072.03, stdev=2357.78 00:18:27.544 lat (usec): min=567, max=33890, avg=16076.87, stdev=2357.80 00:18:27.544 clat percentiles (usec): 00:18:27.544 | 1.00th=[13566], 5.00th=[13829], 10.00th=[13960], 20.00th=[14353], 00:18:27.544 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:18:27.544 | 70.00th=[16188], 80.00th=[16581], 
90.00th=[18482], 95.00th=[21103], 00:18:27.544 | 99.00th=[25035], 99.50th=[26346], 99.90th=[30540], 99.95th=[31851], 00:18:27.544 | 99.99th=[33424] 00:18:27.544 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(256MiB/6241msec); 0 zone resets 00:18:27.544 slat (usec): min=4, max=349, avg= 8.13, stdev= 4.44 00:18:27.544 clat (usec): min=508, max=70700, avg=12125.04, stdev=12977.53 00:18:27.544 lat (usec): min=515, max=70708, avg=12133.17, stdev=12977.73 00:18:27.544 clat percentiles (usec): 00:18:27.544 | 1.00th=[ 652], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 1037], 00:18:27.544 | 30.00th=[ 1434], 40.00th=[ 2868], 50.00th=[10290], 60.00th=[12518], 00:18:27.544 | 70.00th=[14746], 80.00th=[18220], 90.00th=[30278], 95.00th=[36963], 00:18:27.544 | 99.00th=[56886], 99.50th=[59507], 99.90th=[65274], 99.95th=[66323], 00:18:27.544 | 99.99th=[67634] 00:18:27.544 bw ( KiB/s): min=25424, max=57928, per=96.01%, avg=40329.85, stdev=10397.04, samples=13 00:18:27.544 iops : min= 6356, max=14482, avg=10082.46, stdev=2599.26, samples=13 00:18:27.544 lat (usec) : 750=4.08%, 1000=5.62% 00:18:27.544 lat (msec) : 2=8.24%, 4=2.83%, 10=3.66%, 20=63.24%, 50=11.16% 00:18:27.544 lat (msec) : 100=1.17% 00:18:27.544 cpu : usr=98.97%, sys=0.21%, ctx=55, majf=0, minf=5575 00:18:27.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:27.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.544 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.544 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.544 00:18:27.544 Run status group 0 (all jobs): 00:18:27.544 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8192-8192msec 00:18:27.544 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=256MiB (268MB), run=6241-6241msec 00:18:27.544 
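The xtrace repeated before each fio run above shows the harness resolving the ASan runtime that the fio plugin links against (`ldd ... | grep libasan | awk '{print $3}'`) and preloading it ahead of the plugin. A minimal standalone sketch of that pattern follows; the function name and the plugin path in the usage note are illustrative, not part of the harness:

```shell
# Locate the sanitizer runtime a binary links against, so it can be
# preloaded before the fio ioengine plugin (ASan must be the first
# DSO to initialize). Mirrors the loop shown in the xtrace above.
find_asan_lib() {
    local plugin=$1 sanitizer asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # The third column of ldd output is the resolved library path.
        asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    echo "$asan_lib"
}

# Illustrative usage (paths hypothetical):
#   asan_lib=$(find_asan_lib /path/to/spdk_bdev)
#   LD_PRELOAD="$asan_lib /path/to/spdk_bdev" fio job.fio
```

If the plugin was not built with ASan, the function prints an empty string and `LD_PRELOAD` degrades to just the plugin itself, which is the non-sanitized case.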
----------------------------------------------------- 00:18:27.544 Suppressions used: 00:18:27.544 count bytes template 00:18:27.544 1 5 /usr/src/fio/parse.c 00:18:27.544 2 192 /usr/src/fio/iolog.c 00:18:27.544 1 8 libtcmalloc_minimal.so 00:18:27.544 1 904 libcrypto.so 00:18:27.544 ----------------------------------------------------- 00:18:27.544 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.544 Remove shared memory files 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70830 /dev/shm/spdk_tgt_trace.pid86752 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:27.544 ************************************ 00:18:27.544 END TEST ftl_fio_basic 00:18:27.544 ************************************ 00:18:27.544 00:18:27.544 real 0m59.805s 00:18:27.544 user 2m7.517s 00:18:27.544 sys 0m2.813s 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.544 01:18:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:27.544 01:18:00 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:27.544 
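The run summaries above report throughput in both binary and decimal units, e.g. `31.1MiB/s (32.6MB/s)`; fio always pairs the two. The conversion is a fixed factor (1 MiB = 1048576 bytes, 1 MB = 1000000 bytes), as this small helper (name illustrative) checks against the figures in the log:

```shell
# fio prints binary-unit bandwidth with the decimal equivalent in
# parentheses; the ratio is 1048576 / 1000000 ~= 1.0486.
mib_to_mb() {
    awk -v mib="$1" 'BEGIN { printf "%.1f\n", mib * 1048576 / 1000000 }'
}

mib_to_mb 31.1   # matches the "(32.6MB/s)" shown for the 31.1MiB/s read
mib_to_mb 70.3   # matches the "(73.7MB/s)" shown for the 70.3MiB/s read
```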
01:18:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:27.544 01:18:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.544 01:18:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:27.544 ************************************ 00:18:27.544 START TEST ftl_bdevperf 00:18:27.544 ************************************ 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:27.544 * Looking for test storage... 00:18:27.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 
00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:27.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.544 --rc genhtml_branch_coverage=1 00:18:27.544 --rc genhtml_function_coverage=1 00:18:27.544 --rc genhtml_legend=1 00:18:27.544 --rc geninfo_all_blocks=1 00:18:27.544 --rc geninfo_unexecuted_blocks=1 00:18:27.544 00:18:27.544 ' 00:18:27.544 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:27.545 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.545 --rc genhtml_branch_coverage=1 00:18:27.545 --rc genhtml_function_coverage=1 00:18:27.545 --rc genhtml_legend=1 00:18:27.545 --rc geninfo_all_blocks=1 00:18:27.545 --rc geninfo_unexecuted_blocks=1 00:18:27.545 00:18:27.545 ' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.545 --rc genhtml_branch_coverage=1 00:18:27.545 --rc genhtml_function_coverage=1 00:18:27.545 --rc genhtml_legend=1 00:18:27.545 --rc geninfo_all_blocks=1 00:18:27.545 --rc geninfo_unexecuted_blocks=1 00:18:27.545 00:18:27.545 ' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.545 --rc genhtml_branch_coverage=1 00:18:27.545 --rc genhtml_function_coverage=1 00:18:27.545 --rc genhtml_legend=1 00:18:27.545 --rc geninfo_all_blocks=1 00:18:27.545 --rc geninfo_unexecuted_blocks=1 00:18:27.545 00:18:27.545 ' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88582 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88582 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88582 ']' 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:27.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.545 01:18:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-12-14 01:18:00.756081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:27.545 [2024-12-14 01:18:00.756375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88582 ] 00:18:27.545 [2024-12-14 01:18:00.904120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.545 [2024-12-14 01:18:00.933478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:28.112 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:28.370 01:18:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:28.628 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.628 { 00:18:28.628 "name": "nvme0n1", 00:18:28.628 "aliases": [ 00:18:28.628 "ee2c5b86-362f-4a43-bfe3-db66edadf135" 00:18:28.628 ], 00:18:28.628 "product_name": "NVMe disk", 00:18:28.628 "block_size": 4096, 00:18:28.628 "num_blocks": 1310720, 00:18:28.628 "uuid": "ee2c5b86-362f-4a43-bfe3-db66edadf135", 00:18:28.628 "numa_id": -1, 00:18:28.628 "assigned_rate_limits": { 00:18:28.628 "rw_ios_per_sec": 0, 00:18:28.628 "rw_mbytes_per_sec": 0, 00:18:28.628 "r_mbytes_per_sec": 0, 00:18:28.628 "w_mbytes_per_sec": 0 00:18:28.628 }, 00:18:28.628 "claimed": true, 00:18:28.628 "claim_type": "read_many_write_one", 00:18:28.628 "zoned": false, 00:18:28.628 "supported_io_types": { 00:18:28.628 "read": true, 00:18:28.628 "write": true, 00:18:28.628 "unmap": true, 00:18:28.628 "flush": true, 00:18:28.628 "reset": true, 00:18:28.628 "nvme_admin": true, 00:18:28.628 "nvme_io": true, 00:18:28.628 "nvme_io_md": false, 00:18:28.628 "write_zeroes": true, 00:18:28.628 "zcopy": false, 00:18:28.628 "get_zone_info": false, 00:18:28.628 "zone_management": false, 00:18:28.628 "zone_append": false, 00:18:28.628 "compare": true, 00:18:28.628 "compare_and_write": false, 00:18:28.628 "abort": true, 00:18:28.628 "seek_hole": false, 00:18:28.628 "seek_data": false, 00:18:28.628 "copy": true, 00:18:28.628 "nvme_iov_md": false 00:18:28.628 }, 00:18:28.628 "driver_specific": { 00:18:28.628 "nvme": [ 00:18:28.628 { 00:18:28.628 
"pci_address": "0000:00:11.0", 00:18:28.628 "trid": { 00:18:28.628 "trtype": "PCIe", 00:18:28.628 "traddr": "0000:00:11.0" 00:18:28.629 }, 00:18:28.629 "ctrlr_data": { 00:18:28.629 "cntlid": 0, 00:18:28.629 "vendor_id": "0x1b36", 00:18:28.629 "model_number": "QEMU NVMe Ctrl", 00:18:28.629 "serial_number": "12341", 00:18:28.629 "firmware_revision": "8.0.0", 00:18:28.629 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:28.629 "oacs": { 00:18:28.629 "security": 0, 00:18:28.629 "format": 1, 00:18:28.629 "firmware": 0, 00:18:28.629 "ns_manage": 1 00:18:28.629 }, 00:18:28.629 "multi_ctrlr": false, 00:18:28.629 "ana_reporting": false 00:18:28.629 }, 00:18:28.629 "vs": { 00:18:28.629 "nvme_version": "1.4" 00:18:28.629 }, 00:18:28.629 "ns_data": { 00:18:28.629 "id": 1, 00:18:28.629 "can_share": false 00:18:28.629 } 00:18:28.629 } 00:18:28.629 ], 00:18:28.629 "mp_policy": "active_passive" 00:18:28.629 } 00:18:28.629 } 00:18:28.629 ]' 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.629 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:28.887 01:18:02 ftl.ftl_bdevperf -- 
ftl/common.sh@28 -- # stores=ff817681-8bb9-4599-8a4d-3c5ee1022593 00:18:28.887 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:28.887 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff817681-8bb9-4599-8a4d-3c5ee1022593 00:18:29.147 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:29.408 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ed196f48-a004-4f8c-832e-8d579cceee07 00:18:29.408 01:18:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ed196f48-a004-4f8c-832e-8d579cceee07 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:29.669 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.931 { 00:18:29.931 "name": "2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:29.931 "aliases": [ 00:18:29.931 "lvs/nvme0n1p0" 00:18:29.931 ], 00:18:29.931 "product_name": "Logical Volume", 00:18:29.931 "block_size": 4096, 00:18:29.931 "num_blocks": 26476544, 00:18:29.931 "uuid": "2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:29.931 "assigned_rate_limits": { 00:18:29.931 "rw_ios_per_sec": 0, 00:18:29.931 "rw_mbytes_per_sec": 0, 00:18:29.931 "r_mbytes_per_sec": 0, 00:18:29.931 "w_mbytes_per_sec": 0 00:18:29.931 }, 00:18:29.931 "claimed": false, 00:18:29.931 "zoned": false, 00:18:29.931 "supported_io_types": { 00:18:29.931 "read": true, 00:18:29.931 "write": true, 00:18:29.931 "unmap": true, 00:18:29.931 "flush": false, 00:18:29.931 "reset": true, 00:18:29.931 "nvme_admin": false, 00:18:29.931 "nvme_io": false, 00:18:29.931 "nvme_io_md": false, 00:18:29.931 "write_zeroes": true, 00:18:29.931 "zcopy": false, 00:18:29.931 "get_zone_info": false, 00:18:29.931 "zone_management": false, 00:18:29.931 "zone_append": false, 00:18:29.931 "compare": false, 00:18:29.931 "compare_and_write": false, 00:18:29.931 "abort": false, 00:18:29.931 "seek_hole": true, 00:18:29.931 "seek_data": true, 00:18:29.931 "copy": false, 00:18:29.931 "nvme_iov_md": false 00:18:29.931 }, 00:18:29.931 "driver_specific": { 00:18:29.931 "lvol": { 00:18:29.931 "lvol_store_uuid": "ed196f48-a004-4f8c-832e-8d579cceee07", 00:18:29.931 "base_bdev": "nvme0n1", 00:18:29.931 "thin_provision": true, 00:18:29.931 "num_allocated_clusters": 0, 00:18:29.931 "snapshot": false, 00:18:29.931 "clone": false, 00:18:29.931 "esnap_clone": false 00:18:29.931 } 00:18:29.931 } 00:18:29.931 } 00:18:29.931 ]' 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.931 01:18:03 
ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:29.931 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:30.191 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.451 { 00:18:30.451 "name": "2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:30.451 "aliases": [ 00:18:30.451 "lvs/nvme0n1p0" 00:18:30.451 ], 00:18:30.451 "product_name": "Logical Volume", 00:18:30.451 "block_size": 4096, 00:18:30.451 "num_blocks": 26476544, 00:18:30.451 "uuid": 
"2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:30.451 "assigned_rate_limits": { 00:18:30.451 "rw_ios_per_sec": 0, 00:18:30.451 "rw_mbytes_per_sec": 0, 00:18:30.451 "r_mbytes_per_sec": 0, 00:18:30.451 "w_mbytes_per_sec": 0 00:18:30.451 }, 00:18:30.451 "claimed": false, 00:18:30.451 "zoned": false, 00:18:30.451 "supported_io_types": { 00:18:30.451 "read": true, 00:18:30.451 "write": true, 00:18:30.451 "unmap": true, 00:18:30.451 "flush": false, 00:18:30.451 "reset": true, 00:18:30.451 "nvme_admin": false, 00:18:30.451 "nvme_io": false, 00:18:30.451 "nvme_io_md": false, 00:18:30.451 "write_zeroes": true, 00:18:30.451 "zcopy": false, 00:18:30.451 "get_zone_info": false, 00:18:30.451 "zone_management": false, 00:18:30.451 "zone_append": false, 00:18:30.451 "compare": false, 00:18:30.451 "compare_and_write": false, 00:18:30.451 "abort": false, 00:18:30.451 "seek_hole": true, 00:18:30.451 "seek_data": true, 00:18:30.451 "copy": false, 00:18:30.451 "nvme_iov_md": false 00:18:30.451 }, 00:18:30.451 "driver_specific": { 00:18:30.451 "lvol": { 00:18:30.451 "lvol_store_uuid": "ed196f48-a004-4f8c-832e-8d579cceee07", 00:18:30.451 "base_bdev": "nvme0n1", 00:18:30.451 "thin_provision": true, 00:18:30.451 "num_allocated_clusters": 0, 00:18:30.451 "snapshot": false, 00:18:30.451 "clone": false, 00:18:30.451 "esnap_clone": false 00:18:30.451 } 00:18:30.451 } 00:18:30.451 } 00:18:30.451 ]' 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- 
ftl/common.sh@48 -- # cache_size=5171 00:18:30.451 01:18:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:30.713 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 00:18:30.973 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.973 { 00:18:30.973 "name": "2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:30.973 "aliases": [ 00:18:30.973 "lvs/nvme0n1p0" 00:18:30.973 ], 00:18:30.973 "product_name": "Logical Volume", 00:18:30.973 "block_size": 4096, 00:18:30.973 "num_blocks": 26476544, 00:18:30.973 "uuid": "2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290", 00:18:30.973 "assigned_rate_limits": { 00:18:30.973 "rw_ios_per_sec": 0, 00:18:30.973 "rw_mbytes_per_sec": 0, 00:18:30.973 "r_mbytes_per_sec": 0, 00:18:30.973 "w_mbytes_per_sec": 0 00:18:30.973 }, 00:18:30.973 "claimed": false, 00:18:30.973 "zoned": false, 00:18:30.973 "supported_io_types": { 00:18:30.973 "read": true, 00:18:30.973 "write": true, 00:18:30.973 "unmap": true, 00:18:30.973 "flush": false, 00:18:30.973 "reset": true, 00:18:30.974 "nvme_admin": false, 00:18:30.974 "nvme_io": false, 00:18:30.974 "nvme_io_md": false, 00:18:30.974 "write_zeroes": true, 00:18:30.974 "zcopy": false, 00:18:30.974 
"get_zone_info": false, 00:18:30.974 "zone_management": false, 00:18:30.974 "zone_append": false, 00:18:30.974 "compare": false, 00:18:30.974 "compare_and_write": false, 00:18:30.974 "abort": false, 00:18:30.974 "seek_hole": true, 00:18:30.974 "seek_data": true, 00:18:30.974 "copy": false, 00:18:30.974 "nvme_iov_md": false 00:18:30.974 }, 00:18:30.974 "driver_specific": { 00:18:30.974 "lvol": { 00:18:30.974 "lvol_store_uuid": "ed196f48-a004-4f8c-832e-8d579cceee07", 00:18:30.974 "base_bdev": "nvme0n1", 00:18:30.974 "thin_provision": true, 00:18:30.974 "num_allocated_clusters": 0, 00:18:30.974 "snapshot": false, 00:18:30.974 "clone": false, 00:18:30.974 "esnap_clone": false 00:18:30.974 } 00:18:30.974 } 00:18:30.974 } 00:18:30.974 ]' 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:30.974 01:18:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d2fb2cb-af0e-4bd4-b5cb-d1789eb0c290 -c nvc0n1p0 --l2p_dram_limit 20 00:18:30.974 [2024-12-14 01:18:04.573120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.573159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:30.974 [2024-12-14 01:18:04.573171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.974 [2024-12-14 01:18:04.573177] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.573221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.573229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.974 [2024-12-14 01:18:04.573238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:30.974 [2024-12-14 01:18:04.573244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.573260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:30.974 [2024-12-14 01:18:04.573471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:30.974 [2024-12-14 01:18:04.573484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.573492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.974 [2024-12-14 01:18:04.573499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:18:30.974 [2024-12-14 01:18:04.573505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.573536] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7e926a12-ac24-4c4d-b124-447972bad40c 00:18:30.974 [2024-12-14 01:18:04.574509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.574609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:30.974 [2024-12-14 01:18:04.574632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:30.974 [2024-12-14 01:18:04.574641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.579419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:30.974 [2024-12-14 01:18:04.579451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.974 [2024-12-14 01:18:04.579458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:18:30.974 [2024-12-14 01:18:04.579468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.579523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.579531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.974 [2024-12-14 01:18:04.579542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:30.974 [2024-12-14 01:18:04.579553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.579588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.579597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:30.974 [2024-12-14 01:18:04.579606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:30.974 [2024-12-14 01:18:04.579613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.579638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:30.974 [2024-12-14 01:18:04.580903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.580930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.974 [2024-12-14 01:18:04.580940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:18:30.974 [2024-12-14 01:18:04.580947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.580972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.580981] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:30.974 [2024-12-14 01:18:04.580990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:30.974 [2024-12-14 01:18:04.580996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.581015] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:30.974 [2024-12-14 01:18:04.581132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:30.974 [2024-12-14 01:18:04.581143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:30.974 [2024-12-14 01:18:04.581152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:30.974 [2024-12-14 01:18:04.581161] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581168] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:30.974 [2024-12-14 01:18:04.581181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:30.974 [2024-12-14 01:18:04.581188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:30.974 [2024-12-14 01:18:04.581197] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:30.974 [2024-12-14 01:18:04.581204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.581212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:30.974 [2024-12-14 01:18:04.581220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 
00:18:30.974 [2024-12-14 01:18:04.581226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.581292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.974 [2024-12-14 01:18:04.581300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:30.974 [2024-12-14 01:18:04.581307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:30.974 [2024-12-14 01:18:04.581312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.974 [2024-12-14 01:18:04.581400] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:30.974 [2024-12-14 01:18:04.581407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:30.974 [2024-12-14 01:18:04.581417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:30.974 [2024-12-14 01:18:04.581436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:30.974 [2024-12-14 01:18:04.581453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.974 [2024-12-14 01:18:04.581464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:30.974 [2024-12-14 01:18:04.581470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:30.974 [2024-12-14 01:18:04.581478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:18:30.974 [2024-12-14 01:18:04.581483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:30.974 [2024-12-14 01:18:04.581489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:30.974 [2024-12-14 01:18:04.581494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:30.974 [2024-12-14 01:18:04.581507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:30.974 [2024-12-14 01:18:04.581524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:30.974 [2024-12-14 01:18:04.581542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:30.974 [2024-12-14 01:18:04.581561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:30.974 [2024-12-14 01:18:04.581567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.974 [2024-12-14 01:18:04.581575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:30.974 [2024-12-14 01:18:04.581581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:30.975 [2024-12-14 01:18:04.581588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:18:30.975 [2024-12-14 01:18:04.581593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:30.975 [2024-12-14 01:18:04.581600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:30.975 [2024-12-14 01:18:04.581606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.975 [2024-12-14 01:18:04.581613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:30.975 [2024-12-14 01:18:04.581618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:30.975 [2024-12-14 01:18:04.581794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.975 [2024-12-14 01:18:04.581813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:30.975 [2024-12-14 01:18:04.581831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:30.975 [2024-12-14 01:18:04.581848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.975 [2024-12-14 01:18:04.581865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:30.975 [2024-12-14 01:18:04.581881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:30.975 [2024-12-14 01:18:04.581898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.975 [2024-12-14 01:18:04.581914] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:30.975 [2024-12-14 01:18:04.581934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:30.975 [2024-12-14 01:18:04.581997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.975 [2024-12-14 01:18:04.582016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.975 [2024-12-14 01:18:04.582031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:30.975 [2024-12-14 01:18:04.582047] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:30.975 [2024-12-14 01:18:04.582061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:30.975 [2024-12-14 01:18:04.582077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:30.975 [2024-12-14 01:18:04.582090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:30.975 [2024-12-14 01:18:04.582106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:30.975 [2024-12-14 01:18:04.582122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:30.975 [2024-12-14 01:18:04.582147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:30.975 [2024-12-14 01:18:04.582228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:30.975 [2024-12-14 01:18:04.582250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:30.975 [2024-12-14 01:18:04.582272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:30.975 [2024-12-14 01:18:04.582325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:30.975 [2024-12-14 01:18:04.582352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:30.975 [2024-12-14 01:18:04.582373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:30.975 [2024-12-14 01:18:04.582401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:30.975 [2024-12-14 01:18:04.582448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:30.975 [2024-12-14 01:18:04.582473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:30.975 [2024-12-14 01:18:04.582616] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:30.975 [2024-12-14 01:18:04.582656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:30.975 [2024-12-14 01:18:04.582702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:30.975 [2024-12-14 01:18:04.582724] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.235 [2024-12-14 01:18:04.582747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.235 [2024-12-14 01:18:04.582764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.235 [2024-12-14 01:18:04.582776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.235 [2024-12-14 01:18:04.582783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:18:31.235 [2024-12-14 01:18:04.582792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.235 [2024-12-14 01:18:04.582821] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:31.235 [2024-12-14 01:18:04.582830] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:35.437 [2024-12-14 01:18:08.208777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.208868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:35.437 [2024-12-14 01:18:08.208890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3625.940 ms 00:18:35.437 [2024-12-14 01:18:08.208903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 01:18:08.223463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.223740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.437 [2024-12-14 01:18:08.223765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.431 ms 00:18:35.437 [2024-12-14 01:18:08.223779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 
01:18:08.223904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.223926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.437 [2024-12-14 01:18:08.223939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:35.437 [2024-12-14 01:18:08.223950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 01:18:08.249855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.249951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.437 [2024-12-14 01:18:08.249985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms 00:18:35.437 [2024-12-14 01:18:08.250008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 01:18:08.250078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.250109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.437 [2024-12-14 01:18:08.250128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.437 [2024-12-14 01:18:08.250148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 01:18:08.250870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.250929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.437 [2024-12-14 01:18:08.250951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:18:35.437 [2024-12-14 01:18:08.250976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.437 [2024-12-14 01:18:08.251225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.437 [2024-12-14 01:18:08.251258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata
00:18:35.437 [2024-12-14 01:18:08.251282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms
00:18:35.437 [2024-12-14 01:18:08.251306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.259820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.259869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:35.437 [2024-12-14 01:18:08.259880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.480 ms
00:18:35.437 [2024-12-14 01:18:08.259894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.269947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB
00:18:35.437 [2024-12-14 01:18:08.277762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.277804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:35.437 [2024-12-14 01:18:08.277818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.796 ms
00:18:35.437 [2024-12-14 01:18:08.277826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.369021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.369088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:18:35.437 [2024-12-14 01:18:08.369109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.158 ms
00:18:35.437 [2024-12-14 01:18:08.369121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.369331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.369343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:35.437 [2024-12-14 01:18:08.369355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms
00:18:35.437 [2024-12-14 01:18:08.369376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.376353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.376410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:18:35.437 [2024-12-14 01:18:08.376425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms
00:18:35.437 [2024-12-14 01:18:08.376434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.382359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.382413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:18:35.437 [2024-12-14 01:18:08.382427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.863 ms
00:18:35.437 [2024-12-14 01:18:08.382434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.437 [2024-12-14 01:18:08.382855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.437 [2024-12-14 01:18:08.382880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:35.437 [2024-12-14 01:18:08.382895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms
00:18:35.437 [2024-12-14 01:18:08.382913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.430545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.430815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:18:35.438 [2024-12-14 01:18:08.430869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.602 ms
00:18:35.438 [2024-12-14 01:18:08.430878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.439040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.439103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:18:35.438 [2024-12-14 01:18:08.439117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms
00:18:35.438 [2024-12-14 01:18:08.439130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.445939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.446135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:18:35.438 [2024-12-14 01:18:08.446160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.775 ms
00:18:35.438 [2024-12-14 01:18:08.446168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.453479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.453699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:35.438 [2024-12-14 01:18:08.453728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.276 ms
00:18:35.438 [2024-12-14 01:18:08.453737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.453771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.453785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:35.438 [2024-12-14 01:18:08.453797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:18:35.438 [2024-12-14 01:18:08.453805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.453878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.438 [2024-12-14 01:18:08.453887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:35.438 [2024-12-14 01:18:08.453897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:18:35.438 [2024-12-14 01:18:08.453905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.438 [2024-12-14 01:18:08.455064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3881.400 ms, result 0
00:18:35.438 {
00:18:35.438 "name": "ftl0",
00:18:35.438 "uuid": "7e926a12-ac24-4c4d-b124-447972bad40c"
00:18:35.438 }
00:18:35.438 01:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:18:35.438 01:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:18:35.438 01:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:18:35.438 01:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:18:35.438 [2024-12-14 01:18:08.788926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:35.438 I/O size of 69632 is greater than zero copy threshold (65536).
00:18:35.438 Zero copy mechanism will not be used.
00:18:35.438 Running I/O for 4 seconds...
00:18:37.330 917.00 IOPS, 60.89 MiB/s [2024-12-14T01:18:11.885Z] 1353.00 IOPS, 89.85 MiB/s [2024-12-14T01:18:12.954Z] 1278.67 IOPS, 84.91 MiB/s [2024-12-14T01:18:12.954Z] 1557.75 IOPS, 103.44 MiB/s
00:18:39.342 Latency(us)
00:18:39.342 [2024-12-14T01:18:12.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.342 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:39.342 ftl0 : 4.00 1557.04 103.40 0.00 0.00 676.01 151.24 3251.59
00:18:39.342 [2024-12-14T01:18:12.954Z] ===================================================================================================================
00:18:39.342 [2024-12-14T01:18:12.954Z] Total : 1557.04 103.40 0.00 0.00 676.01 151.24 3251.59
00:18:39.342 [2024-12-14 01:18:12.798640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:39.342 {
00:18:39.342 "results": [
00:18:39.342 {
00:18:39.342 "job": "ftl0",
00:18:39.342 "core_mask": "0x1",
00:18:39.342 "workload": "randwrite",
00:18:39.342 "status": "finished",
00:18:39.342 "queue_depth": 1,
00:18:39.342 "io_size": 69632,
00:18:39.342 "runtime": 4.002455,
00:18:39.342 "iops": 1557.0443640215817,
00:18:39.342 "mibps": 103.39747729830816,
00:18:39.342 "io_failed": 0,
00:18:39.342 "io_timeout": 0,
00:18:39.342 "avg_latency_us": 676.0140456206182,
00:18:39.342 "min_latency_us": 151.2369230769231,
00:18:39.342 "max_latency_us": 3251.5938461538462
00:18:39.342 }
00:18:39.342 ],
00:18:39.342 "core_count": 1
00:18:39.342 }
00:18:39.342 01:18:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:39.342 [2024-12-14 01:18:12.915963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:39.342 Running I/O for 4 seconds...
00:18:41.679 5405.00 IOPS, 21.11 MiB/s [2024-12-14T01:18:16.238Z] 5616.00 IOPS, 21.94 MiB/s [2024-12-14T01:18:17.188Z] 5736.67 IOPS, 22.41 MiB/s [2024-12-14T01:18:17.188Z] 5731.00 IOPS, 22.39 MiB/s
00:18:43.576 Latency(us)
00:18:43.576 [2024-12-14T01:18:17.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:43.576 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:43.576 ftl0 : 4.03 5712.27 22.31 0.00 0.00 22307.56 329.26 52025.50
00:18:43.576 [2024-12-14T01:18:17.188Z] ===================================================================================================================
00:18:43.576 [2024-12-14T01:18:17.188Z] Total : 5712.27 22.31 0.00 0.00 22307.56 0.00 52025.50
00:18:43.576 [2024-12-14 01:18:16.958618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:43.576 {
00:18:43.576 "results": [
00:18:43.576 {
00:18:43.576 "job": "ftl0",
00:18:43.576 "core_mask": "0x1",
00:18:43.576 "workload": "randwrite",
00:18:43.576 "status": "finished",
00:18:43.576 "queue_depth": 128,
00:18:43.576 "io_size": 4096,
00:18:43.576 "runtime": 4.035,
00:18:43.576 "iops": 5712.267657992565,
00:18:43.576 "mibps": 22.313545539033456,
00:18:43.576 "io_failed": 0,
00:18:43.576 "io_timeout": 0,
00:18:43.576 "avg_latency_us": 22307.563407189366,
00:18:43.576 "min_latency_us": 329.2553846153846,
00:18:43.576 "max_latency_us": 52025.50153846154
00:18:43.576 }
00:18:43.576 ],
00:18:43.576 "core_count": 1
00:18:43.576 }
00:18:43.576 01:18:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:43.576 [2024-12-14 01:18:17.064338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:43.576 Running I/O for 4 seconds...
00:18:45.467 6140.00 IOPS, 23.98 MiB/s [2024-12-14T01:18:20.469Z] 5271.50 IOPS, 20.59 MiB/s [2024-12-14T01:18:21.417Z] 5114.67 IOPS, 19.98 MiB/s [2024-12-14T01:18:21.417Z] 5018.25 IOPS, 19.60 MiB/s
00:18:47.805 Latency(us)
00:18:47.805 [2024-12-14T01:18:21.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.805 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:47.805 Verification LBA range: start 0x0 length 0x1400000
00:18:47.805 ftl0 : 4.02 5027.20 19.64 0.00 0.00 25376.26 230.01 57268.38
00:18:47.805 [2024-12-14T01:18:21.417Z] ===================================================================================================================
00:18:47.805 [2024-12-14T01:18:21.417Z] Total : 5027.20 19.64 0.00 0.00 25376.26 0.00 57268.38
00:18:47.805 [2024-12-14 01:18:21.090424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:47.805 {
00:18:47.805 "results": [
00:18:47.805 {
00:18:47.805 "job": "ftl0",
00:18:47.805 "core_mask": "0x1",
00:18:47.805 "workload": "verify",
00:18:47.805 "status": "finished",
00:18:47.805 "verify_range": {
00:18:47.805 "start": 0,
00:18:47.805 "length": 20971520
00:18:47.805 },
00:18:47.805 "queue_depth": 128,
00:18:47.805 "io_size": 4096,
00:18:47.805 "runtime": 4.017541,
00:18:47.805 "iops": 5027.204451678278,
00:18:47.805 "mibps": 19.637517389368274,
00:18:47.805 "io_failed": 0,
00:18:47.805 "io_timeout": 0,
00:18:47.805 "avg_latency_us": 25376.260401202006,
00:18:47.805 "min_latency_us": 230.00615384615384,
00:18:47.805 "max_latency_us": 57268.38153846154
00:18:47.805 }
00:18:47.805 ],
00:18:47.805 "core_count": 1
00:18:47.805 }
00:18:47.805 01:18:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:47.805 [2024-12-14 01:18:21.310790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.805 [2024-12-14 01:18:21.310842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:47.805 [2024-12-14 01:18:21.310858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:47.805 [2024-12-14 01:18:21.310867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.805 [2024-12-14 01:18:21.310895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:47.805 [2024-12-14 01:18:21.311554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.805 [2024-12-14 01:18:21.311595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:47.805 [2024-12-14 01:18:21.311608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms
00:18:47.805 [2024-12-14 01:18:21.311619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.805 [2024-12-14 01:18:21.314795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.805 [2024-12-14 01:18:21.314853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:47.805 [2024-12-14 01:18:21.314866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms
00:18:47.805 [2024-12-14 01:18:21.314881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.541916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.542132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:48.069 [2024-12-14 01:18:21.542158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 227.016 ms
00:18:48.069 [2024-12-14 01:18:21.542169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.548342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.548392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:48.069 [2024-12-14 01:18:21.548404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms
00:18:48.069 [2024-12-14 01:18:21.548414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.551436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.551492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:48.069 [2024-12-14 01:18:21.551502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.965 ms
00:18:48.069 [2024-12-14 01:18:21.551512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.557830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.557892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:48.069 [2024-12-14 01:18:21.557909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.275 ms
00:18:48.069 [2024-12-14 01:18:21.557923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.558050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.558065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:48.069 [2024-12-14 01:18:21.558074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms
00:18:48.069 [2024-12-14 01:18:21.558084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.561252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.069 [2024-12-14 01:18:21.561305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:48.069 [2024-12-14 01:18:21.561316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms
00:18:48.069 [2024-12-14 01:18:21.561325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.069 [2024-12-14 01:18:21.563977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.070 [2024-12-14 01:18:21.564028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:48.070 [2024-12-14 01:18:21.564037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms
00:18:48.070 [2024-12-14 01:18:21.564047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.070 [2024-12-14 01:18:21.566216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.070 [2024-12-14 01:18:21.566268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:48.070 [2024-12-14 01:18:21.566277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms
00:18:48.070 [2024-12-14 01:18:21.566289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.070 [2024-12-14 01:18:21.568224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.070 [2024-12-14 01:18:21.568280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:48.070 [2024-12-14 01:18:21.568290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms
00:18:48.070 [2024-12-14 01:18:21.568299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.070 [2024-12-14 01:18:21.568336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:48.070 [2024-12-14 01:18:21.568357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:48.070 [2024-12-14 01:18:21.568965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.568973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.568983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.568991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:48.071 [2024-12-14 01:18:21.569280] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:48.071 [2024-12-14 01:18:21.569288] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7e926a12-ac24-4c4d-b124-447972bad40c
00:18:48.071 [2024-12-14 01:18:21.569305] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:48.071 [2024-12-14 01:18:21.569313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:48.071 [2024-12-14 01:18:21.569322] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:48.071 [2024-12-14 01:18:21.569330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:48.071 [2024-12-14 01:18:21.569341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:48.071 [2024-12-14 01:18:21.569350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:48.071 [2024-12-14 01:18:21.569359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:48.071 [2024-12-14 01:18:21.569365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:48.071 [2024-12-14 01:18:21.569389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:48.071 [2024-12-14 01:18:21.569397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.071 [2024-12-14 01:18:21.569414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:48.071 [2024-12-14 01:18:21.569425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms
00:18:48.071 [2024-12-14 01:18:21.569434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.571968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.071 [2024-12-14 01:18:21.572112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:48.071 [2024-12-14 01:18:21.572177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.515 ms
00:18:48.071 [2024-12-14 01:18:21.572205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.572361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:48.071 [2024-12-14 01:18:21.572401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:48.071 [2024-12-14 01:18:21.572492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms
00:18:48.071 [2024-12-14 01:18:21.572521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.580031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.580202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:48.071 [2024-12-14 01:18:21.580611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.580686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.580832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.580914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:48.071 [2024-12-14 01:18:21.580977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.581003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.581116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.581158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:48.071 [2024-12-14 01:18:21.581231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.581258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.581290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.581312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:48.071 [2024-12-14 01:18:21.581389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.581419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.594998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.595188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:48.071 [2024-12-14 01:18:21.595245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.595271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.607020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.607188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:48.071 [2024-12-14 01:18:21.607249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.607274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.607373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.607402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:48.071 [2024-12-14 01:18:21.607423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.607451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.607610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.607670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:48.071 [2024-12-14 01:18:21.607692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.071 [2024-12-14 01:18:21.607720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.071 [2024-12-14 01:18:21.607836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.071 [2024-12-14 01:18:21.607866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:48.071 [2024-12-14 01:18:21.607891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.072 [2024-12-14 01:18:21.607975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.072 [2024-12-14 01:18:21.608038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.072 [2024-12-14 01:18:21.608065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:48.072 [2024-12-14 01:18:21.608131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.072 [2024-12-14 01:18:21.608158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.072 [2024-12-14 01:18:21.608217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.072 [2024-12-14 01:18:21.608241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:48.072 [2024-12-14 01:18:21.608293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.072 [2024-12-14 01:18:21.608317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.072 [2024-12-14 01:18:21.608382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:48.072 [2024-12-14 01:18:21.608409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:48.072 [2024-12-14 01:18:21.608430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:48.072 [2024-12-14 01:18:21.608456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:48.072 [2024-12-14 01:18:21.608617]
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 297.779 ms, result 0 00:18:48.072 true 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88582 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88582 ']' 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88582 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88582 00:18:48.072 killing process with pid 88582 00:18:48.072 Received shutdown signal, test time was about 4.000000 seconds 00:18:48.072 00:18:48.072 Latency(us) 00:18:48.072 [2024-12-14T01:18:21.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.072 [2024-12-14T01:18:21.684Z] =================================================================================================================== 00:18:48.072 [2024-12-14T01:18:21.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88582' 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88582 00:18:48.072 01:18:21 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88582 00:18:54.667 Remove shared memory files 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo 
Remove shared memory files 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:54.667 ************************************ 00:18:54.667 END TEST ftl_bdevperf 00:18:54.667 ************************************ 00:18:54.667 00:18:54.667 real 0m26.666s 00:18:54.667 user 0m29.216s 00:18:54.667 sys 0m1.067s 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.667 01:18:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:54.667 01:18:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:54.667 01:18:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:54.667 01:18:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.667 01:18:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:54.667 ************************************ 00:18:54.667 START TEST ftl_trim 00:18:54.667 ************************************ 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:54.667 * Looking for test storage... 
00:18:54.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.667 01:18:27 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.667 --rc genhtml_branch_coverage=1 00:18:54.667 --rc genhtml_function_coverage=1 00:18:54.667 --rc genhtml_legend=1 00:18:54.667 --rc geninfo_all_blocks=1 00:18:54.667 --rc geninfo_unexecuted_blocks=1 00:18:54.667 00:18:54.667 ' 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.667 --rc genhtml_branch_coverage=1 00:18:54.667 --rc genhtml_function_coverage=1 00:18:54.667 --rc genhtml_legend=1 00:18:54.667 --rc geninfo_all_blocks=1 00:18:54.667 --rc geninfo_unexecuted_blocks=1 00:18:54.667 00:18:54.667 ' 00:18:54.667 
01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.667 --rc genhtml_branch_coverage=1 00:18:54.667 --rc genhtml_function_coverage=1 00:18:54.667 --rc genhtml_legend=1 00:18:54.667 --rc geninfo_all_blocks=1 00:18:54.667 --rc geninfo_unexecuted_blocks=1 00:18:54.667 00:18:54.667 ' 00:18:54.667 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.667 --rc genhtml_branch_coverage=1 00:18:54.667 --rc genhtml_function_coverage=1 00:18:54.667 --rc genhtml_legend=1 00:18:54.667 --rc geninfo_all_blocks=1 00:18:54.667 --rc geninfo_unexecuted_blocks=1 00:18:54.667 00:18:54.667 ' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:54.667 01:18:27 
ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:54.667 01:18:27 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=88928 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 88928 00:18:54.668 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88928 ']' 00:18:54.668 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.668 01:18:27 ftl.ftl_trim -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.668 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.668 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.668 01:18:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:54.668 01:18:27 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:54.668 [2024-12-14 01:18:27.519665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:54.668 [2024-12-14 01:18:27.520248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88928 ] 00:18:54.668 [2024-12-14 01:18:27.668842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.668 [2024-12-14 01:18:27.700533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.668 [2024-12-14 01:18:27.700889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.668 [2024-12-14 01:18:27.700937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.929 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.929 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:54.930 01:18:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:54.930 01:18:28 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:54.930 01:18:28 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:54.930 01:18:28 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:54.930 01:18:28 
ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:54.930 01:18:28 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:55.191 01:18:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:55.191 01:18:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:55.191 01:18:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:55.191 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:55.191 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:55.191 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:55.191 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:55.191 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:55.453 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:55.453 { 00:18:55.453 "name": "nvme0n1", 00:18:55.453 "aliases": [ 00:18:55.453 "977923f3-009b-405c-9149-4a6f50a40b9d" 00:18:55.453 ], 00:18:55.453 "product_name": "NVMe disk", 00:18:55.453 "block_size": 4096, 00:18:55.453 "num_blocks": 1310720, 00:18:55.453 "uuid": "977923f3-009b-405c-9149-4a6f50a40b9d", 00:18:55.453 "numa_id": -1, 00:18:55.453 "assigned_rate_limits": { 00:18:55.453 "rw_ios_per_sec": 0, 00:18:55.453 "rw_mbytes_per_sec": 0, 00:18:55.453 "r_mbytes_per_sec": 0, 00:18:55.453 "w_mbytes_per_sec": 0 00:18:55.453 }, 00:18:55.453 "claimed": true, 00:18:55.453 "claim_type": "read_many_write_one", 00:18:55.453 "zoned": false, 00:18:55.453 "supported_io_types": { 00:18:55.453 "read": true, 00:18:55.453 "write": true, 00:18:55.453 "unmap": true, 00:18:55.453 "flush": true, 00:18:55.453 "reset": true, 00:18:55.453 "nvme_admin": true, 00:18:55.453 "nvme_io": true, 00:18:55.453 "nvme_io_md": false, 00:18:55.453 "write_zeroes": true, 
00:18:55.453 "zcopy": false, 00:18:55.453 "get_zone_info": false, 00:18:55.453 "zone_management": false, 00:18:55.453 "zone_append": false, 00:18:55.453 "compare": true, 00:18:55.453 "compare_and_write": false, 00:18:55.453 "abort": true, 00:18:55.453 "seek_hole": false, 00:18:55.453 "seek_data": false, 00:18:55.453 "copy": true, 00:18:55.453 "nvme_iov_md": false 00:18:55.453 }, 00:18:55.453 "driver_specific": { 00:18:55.453 "nvme": [ 00:18:55.453 { 00:18:55.453 "pci_address": "0000:00:11.0", 00:18:55.454 "trid": { 00:18:55.454 "trtype": "PCIe", 00:18:55.454 "traddr": "0000:00:11.0" 00:18:55.454 }, 00:18:55.454 "ctrlr_data": { 00:18:55.454 "cntlid": 0, 00:18:55.454 "vendor_id": "0x1b36", 00:18:55.454 "model_number": "QEMU NVMe Ctrl", 00:18:55.454 "serial_number": "12341", 00:18:55.454 "firmware_revision": "8.0.0", 00:18:55.454 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:55.454 "oacs": { 00:18:55.454 "security": 0, 00:18:55.454 "format": 1, 00:18:55.454 "firmware": 0, 00:18:55.454 "ns_manage": 1 00:18:55.454 }, 00:18:55.454 "multi_ctrlr": false, 00:18:55.454 "ana_reporting": false 00:18:55.454 }, 00:18:55.454 "vs": { 00:18:55.454 "nvme_version": "1.4" 00:18:55.454 }, 00:18:55.454 "ns_data": { 00:18:55.454 "id": 1, 00:18:55.454 "can_share": false 00:18:55.454 } 00:18:55.454 } 00:18:55.454 ], 00:18:55.454 "mp_policy": "active_passive" 00:18:55.454 } 00:18:55.454 } 00:18:55.454 ]' 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:55.454 01:18:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:55.454 01:18:28 ftl.ftl_trim -- ftl/common.sh@63 -- # 
base_size=5120 00:18:55.454 01:18:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:55.454 01:18:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:55.454 01:18:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:55.454 01:18:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:55.716 01:18:29 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ed196f48-a004-4f8c-832e-8d579cceee07 00:18:55.716 01:18:29 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:55.716 01:18:29 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed196f48-a004-4f8c-832e-8d579cceee07 00:18:55.977 01:18:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6dae33e5-23ba-4182-a602-bfbd5f907e6b 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6dae33e5-23ba-4182-a602-bfbd5f907e6b 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:56.237 01:18:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.237 01:18:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.237 
01:18:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:56.237 01:18:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:56.237 01:18:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:56.237 01:18:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:56.496 { 00:18:56.496 "name": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:56.496 "aliases": [ 00:18:56.496 "lvs/nvme0n1p0" 00:18:56.496 ], 00:18:56.496 "product_name": "Logical Volume", 00:18:56.496 "block_size": 4096, 00:18:56.496 "num_blocks": 26476544, 00:18:56.496 "uuid": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:56.496 "assigned_rate_limits": { 00:18:56.496 "rw_ios_per_sec": 0, 00:18:56.496 "rw_mbytes_per_sec": 0, 00:18:56.496 "r_mbytes_per_sec": 0, 00:18:56.496 "w_mbytes_per_sec": 0 00:18:56.496 }, 00:18:56.496 "claimed": false, 00:18:56.496 "zoned": false, 00:18:56.496 "supported_io_types": { 00:18:56.496 "read": true, 00:18:56.496 "write": true, 00:18:56.496 "unmap": true, 00:18:56.496 "flush": false, 00:18:56.496 "reset": true, 00:18:56.496 "nvme_admin": false, 00:18:56.496 "nvme_io": false, 00:18:56.496 "nvme_io_md": false, 00:18:56.496 "write_zeroes": true, 00:18:56.496 "zcopy": false, 00:18:56.496 "get_zone_info": false, 00:18:56.496 "zone_management": false, 00:18:56.496 "zone_append": false, 00:18:56.496 "compare": false, 00:18:56.496 "compare_and_write": false, 00:18:56.496 "abort": false, 00:18:56.496 "seek_hole": true, 00:18:56.496 "seek_data": true, 00:18:56.496 "copy": false, 00:18:56.496 "nvme_iov_md": false 00:18:56.496 }, 00:18:56.496 "driver_specific": { 00:18:56.496 "lvol": { 00:18:56.496 "lvol_store_uuid": "6dae33e5-23ba-4182-a602-bfbd5f907e6b", 00:18:56.496 "base_bdev": "nvme0n1", 00:18:56.496 "thin_provision": true, 00:18:56.496 
"num_allocated_clusters": 0, 00:18:56.496 "snapshot": false, 00:18:56.496 "clone": false, 00:18:56.496 "esnap_clone": false 00:18:56.496 } 00:18:56.496 } 00:18:56.496 } 00:18:56.496 ]' 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:56.496 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:56.496 01:18:30 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:56.496 01:18:30 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:56.496 01:18:30 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:56.755 01:18:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:56.755 01:18:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:56.755 01:18:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.755 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:56.755 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:56.755 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:56.755 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:56.755 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:57.013 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.013 { 00:18:57.013 "name": 
"e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:57.013 "aliases": [ 00:18:57.013 "lvs/nvme0n1p0" 00:18:57.013 ], 00:18:57.013 "product_name": "Logical Volume", 00:18:57.013 "block_size": 4096, 00:18:57.013 "num_blocks": 26476544, 00:18:57.013 "uuid": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:57.013 "assigned_rate_limits": { 00:18:57.014 "rw_ios_per_sec": 0, 00:18:57.014 "rw_mbytes_per_sec": 0, 00:18:57.014 "r_mbytes_per_sec": 0, 00:18:57.014 "w_mbytes_per_sec": 0 00:18:57.014 }, 00:18:57.014 "claimed": false, 00:18:57.014 "zoned": false, 00:18:57.014 "supported_io_types": { 00:18:57.014 "read": true, 00:18:57.014 "write": true, 00:18:57.014 "unmap": true, 00:18:57.014 "flush": false, 00:18:57.014 "reset": true, 00:18:57.014 "nvme_admin": false, 00:18:57.014 "nvme_io": false, 00:18:57.014 "nvme_io_md": false, 00:18:57.014 "write_zeroes": true, 00:18:57.014 "zcopy": false, 00:18:57.014 "get_zone_info": false, 00:18:57.014 "zone_management": false, 00:18:57.014 "zone_append": false, 00:18:57.014 "compare": false, 00:18:57.014 "compare_and_write": false, 00:18:57.014 "abort": false, 00:18:57.014 "seek_hole": true, 00:18:57.014 "seek_data": true, 00:18:57.014 "copy": false, 00:18:57.014 "nvme_iov_md": false 00:18:57.014 }, 00:18:57.014 "driver_specific": { 00:18:57.014 "lvol": { 00:18:57.014 "lvol_store_uuid": "6dae33e5-23ba-4182-a602-bfbd5f907e6b", 00:18:57.014 "base_bdev": "nvme0n1", 00:18:57.014 "thin_provision": true, 00:18:57.014 "num_allocated_clusters": 0, 00:18:57.014 "snapshot": false, 00:18:57.014 "clone": false, 00:18:57.014 "esnap_clone": false 00:18:57.014 } 00:18:57.014 } 00:18:57.014 } 00:18:57.014 ]' 00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 
00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:57.014 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:57.014 01:18:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:57.014 01:18:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:57.273 01:18:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:57.273 01:18:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:57.273 01:18:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:57.273 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:57.273 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.273 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:57.273 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:57.273 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5c5ddd9-95a1-4a11-9526-8b950ad46c1b 00:18:57.531 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.531 { 00:18:57.531 "name": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:57.531 "aliases": [ 00:18:57.531 "lvs/nvme0n1p0" 00:18:57.531 ], 00:18:57.531 "product_name": "Logical Volume", 00:18:57.531 "block_size": 4096, 00:18:57.531 "num_blocks": 26476544, 00:18:57.531 "uuid": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:18:57.531 "assigned_rate_limits": { 00:18:57.531 "rw_ios_per_sec": 0, 00:18:57.531 "rw_mbytes_per_sec": 0, 00:18:57.531 "r_mbytes_per_sec": 0, 00:18:57.531 "w_mbytes_per_sec": 0 00:18:57.531 }, 00:18:57.531 "claimed": false, 00:18:57.531 "zoned": false, 00:18:57.531 "supported_io_types": { 00:18:57.531 "read": true, 00:18:57.531 "write": true, 00:18:57.531 "unmap": true, 
00:18:57.531 "flush": false, 00:18:57.531 "reset": true, 00:18:57.531 "nvme_admin": false, 00:18:57.531 "nvme_io": false, 00:18:57.531 "nvme_io_md": false, 00:18:57.531 "write_zeroes": true, 00:18:57.531 "zcopy": false, 00:18:57.531 "get_zone_info": false, 00:18:57.531 "zone_management": false, 00:18:57.531 "zone_append": false, 00:18:57.531 "compare": false, 00:18:57.531 "compare_and_write": false, 00:18:57.531 "abort": false, 00:18:57.531 "seek_hole": true, 00:18:57.531 "seek_data": true, 00:18:57.531 "copy": false, 00:18:57.531 "nvme_iov_md": false 00:18:57.531 }, 00:18:57.531 "driver_specific": { 00:18:57.531 "lvol": { 00:18:57.531 "lvol_store_uuid": "6dae33e5-23ba-4182-a602-bfbd5f907e6b", 00:18:57.531 "base_bdev": "nvme0n1", 00:18:57.531 "thin_provision": true, 00:18:57.531 "num_allocated_clusters": 0, 00:18:57.531 "snapshot": false, 00:18:57.531 "clone": false, 00:18:57.531 "esnap_clone": false 00:18:57.531 } 00:18:57.531 } 00:18:57.531 } 00:18:57.531 ]' 00:18:57.531 01:18:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.531 01:18:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.531 01:18:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:57.531 01:18:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:57.532 01:18:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:57.532 01:18:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:57.532 01:18:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:57.532 01:18:31 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e5c5ddd9-95a1-4a11-9526-8b950ad46c1b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:57.792 [2024-12-14 01:18:31.229872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.230016] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:57.792 [2024-12-14 01:18:31.230034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:57.792 [2024-12-14 01:18:31.230047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.232463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.232498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.792 [2024-12-14 01:18:31.232508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:18:57.792 [2024-12-14 01:18:31.232528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.232634] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:57.792 [2024-12-14 01:18:31.232885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:57.792 [2024-12-14 01:18:31.232912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.232922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.792 [2024-12-14 01:18:31.232932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:18:57.792 [2024-12-14 01:18:31.232942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.233050] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6143b464-b06c-4534-b843-f22084791cb4 00:18:57.792 [2024-12-14 01:18:31.234137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.234176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:57.792 [2024-12-14 01:18:31.234187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 
ms 00:18:57.792 [2024-12-14 01:18:31.234195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.239644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.239678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.792 [2024-12-14 01:18:31.239697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:18:57.792 [2024-12-14 01:18:31.239706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.239804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.239814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.792 [2024-12-14 01:18:31.239823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:57.792 [2024-12-14 01:18:31.239834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.239881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.239890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:57.792 [2024-12-14 01:18:31.239899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.792 [2024-12-14 01:18:31.239906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.239941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:57.792 [2024-12-14 01:18:31.241366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.241492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.792 [2024-12-14 01:18:31.241508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:18:57.792 [2024-12-14 
01:18:31.241517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.241564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.241574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:57.792 [2024-12-14 01:18:31.241582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:57.792 [2024-12-14 01:18:31.241592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.241634] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:57.792 [2024-12-14 01:18:31.241803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:57.792 [2024-12-14 01:18:31.241815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:57.792 [2024-12-14 01:18:31.241837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:57.792 [2024-12-14 01:18:31.241847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:57.792 [2024-12-14 01:18:31.241857] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:57.792 [2024-12-14 01:18:31.241864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:57.792 [2024-12-14 01:18:31.241873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:57.792 [2024-12-14 01:18:31.241888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:57.792 [2024-12-14 01:18:31.241897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:57.792 [2024-12-14 01:18:31.241906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:57.792 [2024-12-14 01:18:31.241915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:57.792 [2024-12-14 01:18:31.241923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:57.792 [2024-12-14 01:18:31.241931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.242033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-12-14 01:18:31.242044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:57.792 [2024-12-14 01:18:31.242051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:57.792 [2024-12-14 01:18:31.242060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-12-14 01:18:31.242197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:57.792 [2024-12-14 01:18:31.242211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:57.792 [2024-12-14 01:18:31.242220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.792 [2024-12-14 01:18:31.242230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:57.792 [2024-12-14 01:18:31.242248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:57.792 [2024-12-14 01:18:31.242265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:57.792 [2024-12-14 01:18:31.242272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.792 [2024-12-14 01:18:31.242289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region band_md_mirror 00:18:57.792 [2024-12-14 01:18:31.242298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:57.792 [2024-12-14 01:18:31.242306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.792 [2024-12-14 01:18:31.242317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:57.792 [2024-12-14 01:18:31.242325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:57.792 [2024-12-14 01:18:31.242334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:57.792 [2024-12-14 01:18:31.242352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:57.792 [2024-12-14 01:18:31.242359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:57.792 [2024-12-14 01:18:31.242376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.792 [2024-12-14 01:18:31.242403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:57.792 [2024-12-14 01:18:31.242413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.792 [2024-12-14 01:18:31.242430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:57.792 [2024-12-14 01:18:31.242437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:57.792 [2024-12-14 01:18:31.242448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.793 [2024-12-14 01:18:31.242455] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region p2l3 00:18:57.793 [2024-12-14 01:18:31.242464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:57.793 [2024-12-14 01:18:31.242470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.793 [2024-12-14 01:18:31.242478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:57.793 [2024-12-14 01:18:31.242485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:57.793 [2024-12-14 01:18:31.242493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.793 [2024-12-14 01:18:31.242499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:57.793 [2024-12-14 01:18:31.242508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:57.793 [2024-12-14 01:18:31.242514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.793 [2024-12-14 01:18:31.242523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:57.793 [2024-12-14 01:18:31.242529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:57.793 [2024-12-14 01:18:31.242537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.793 [2024-12-14 01:18:31.242543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:57.793 [2024-12-14 01:18:31.242552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:57.793 [2024-12-14 01:18:31.242558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.793 [2024-12-14 01:18:31.242565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:57.793 [2024-12-14 01:18:31.242573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:57.793 [2024-12-14 01:18:31.242583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.793 [2024-12-14 
01:18:31.242590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.793 [2024-12-14 01:18:31.242599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:57.793 [2024-12-14 01:18:31.242606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:57.793 [2024-12-14 01:18:31.242615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:57.793 [2024-12-14 01:18:31.242633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:57.793 [2024-12-14 01:18:31.242642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:57.793 [2024-12-14 01:18:31.242649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:57.793 [2024-12-14 01:18:31.242658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:57.793 [2024-12-14 01:18:31.242667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:57.793 [2024-12-14 01:18:31.242684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:57.793 [2024-12-14 01:18:31.242692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:57.793 [2024-12-14 01:18:31.242700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:57.793 [2024-12-14 01:18:31.242710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:57.793 [2024-12-14 01:18:31.242717] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:57.793 [2024-12-14 01:18:31.242727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:57.793 [2024-12-14 01:18:31.242734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:57.793 [2024-12-14 01:18:31.242743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:57.793 [2024-12-14 01:18:31.242751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:57.793 [2024-12-14 01:18:31.242791] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:57.793 [2024-12-14 01:18:31.242799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:57.793 [2024-12-14 01:18:31.242817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:57.793 [2024-12-14 01:18:31.242825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:57.793 [2024-12-14 01:18:31.242833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:57.793 [2024-12-14 01:18:31.242843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.793 [2024-12-14 01:18:31.242851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:57.793 [2024-12-14 01:18:31.242861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:18:57.793 [2024-12-14 01:18:31.242867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.793 [2024-12-14 01:18:31.242956] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:57.793 [2024-12-14 01:18:31.242966] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:00.328 [2024-12-14 01:18:33.584641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.584868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:00.328 [2024-12-14 01:18:33.584941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2341.672 ms 00:19:00.328 [2024-12-14 01:18:33.584971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.593714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.593874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:00.328 [2024-12-14 01:18:33.593942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.631 ms 00:19:00.328 [2024-12-14 01:18:33.593966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.594114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.594141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:00.328 [2024-12-14 01:18:33.594204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:00.328 [2024-12-14 01:18:33.594230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.616133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.616418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:00.328 [2024-12-14 01:18:33.616641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.850 ms 00:19:00.328 [2024-12-14 01:18:33.616702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.617070] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.617236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:00.328 [2024-12-14 01:18:33.617370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:00.328 [2024-12-14 01:18:33.617599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.618210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.618386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:00.328 [2024-12-14 01:18:33.618502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:00.328 [2024-12-14 01:18:33.618598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.619055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.619188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:00.328 [2024-12-14 01:18:33.619278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:00.328 [2024-12-14 01:18:33.619354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.625090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.625193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:00.328 [2024-12-14 01:18:33.625247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.681 ms 00:19:00.328 [2024-12-14 01:18:33.625281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.633587] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:00.328 [2024-12-14 01:18:33.648278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 
[2024-12-14 01:18:33.648389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:00.328 [2024-12-14 01:18:33.648437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.892 ms 00:19:00.328 [2024-12-14 01:18:33.648462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.706375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.706505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:00.328 [2024-12-14 01:18:33.706562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.819 ms 00:19:00.328 [2024-12-14 01:18:33.706591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.706783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.706796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:00.328 [2024-12-14 01:18:33.706805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:00.328 [2024-12-14 01:18:33.706814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.710192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.710229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:00.328 [2024-12-14 01:18:33.710240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:19:00.328 [2024-12-14 01:18:33.710250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.712873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.712908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:00.328 [2024-12-14 01:18:33.712918] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:19:00.328 [2024-12-14 01:18:33.712928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.713227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.713243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:00.328 [2024-12-14 01:18:33.713252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:00.328 [2024-12-14 01:18:33.713262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.740895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.741017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:00.328 [2024-12-14 01:18:33.741034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.600 ms 00:19:00.328 [2024-12-14 01:18:33.741044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.745077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.745119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:00.328 [2024-12-14 01:18:33.745130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:19:00.328 [2024-12-14 01:18:33.745140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.748296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.748331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:00.328 [2024-12-14 01:18:33.748341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:19:00.328 [2024-12-14 01:18:33.748351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 
01:18:33.751904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.751942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:00.328 [2024-12-14 01:18:33.751951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:19:00.328 [2024-12-14 01:18:33.751962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.752019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.752030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:00.328 [2024-12-14 01:18:33.752038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:00.328 [2024-12-14 01:18:33.752047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.328 [2024-12-14 01:18:33.752122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.328 [2024-12-14 01:18:33.752133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:00.329 [2024-12-14 01:18:33.752140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:00.329 [2024-12-14 01:18:33.752149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.329 [2024-12-14 01:18:33.752995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.329 [2024-12-14 01:18:33.754001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2522.855 ms, result 0 00:19:00.329 [2024-12-14 01:18:33.754755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:00.329 { 00:19:00.329 "name": "ftl0", 00:19:00.329 "uuid": "6143b464-b06c-4534-b843-f22084791cb4" 00:19:00.329 } 00:19:00.329 01:18:33 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 
00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:00.329 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:00.588 01:18:33 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:00.588 [ 00:19:00.588 { 00:19:00.588 "name": "ftl0", 00:19:00.588 "aliases": [ 00:19:00.588 "6143b464-b06c-4534-b843-f22084791cb4" 00:19:00.588 ], 00:19:00.588 "product_name": "FTL disk", 00:19:00.588 "block_size": 4096, 00:19:00.588 "num_blocks": 23592960, 00:19:00.588 "uuid": "6143b464-b06c-4534-b843-f22084791cb4", 00:19:00.588 "assigned_rate_limits": { 00:19:00.588 "rw_ios_per_sec": 0, 00:19:00.588 "rw_mbytes_per_sec": 0, 00:19:00.588 "r_mbytes_per_sec": 0, 00:19:00.588 "w_mbytes_per_sec": 0 00:19:00.588 }, 00:19:00.588 "claimed": false, 00:19:00.588 "zoned": false, 00:19:00.588 "supported_io_types": { 00:19:00.588 "read": true, 00:19:00.588 "write": true, 00:19:00.588 "unmap": true, 00:19:00.588 "flush": true, 00:19:00.588 "reset": false, 00:19:00.588 "nvme_admin": false, 00:19:00.588 "nvme_io": false, 00:19:00.588 "nvme_io_md": false, 00:19:00.588 "write_zeroes": true, 00:19:00.588 "zcopy": false, 00:19:00.588 "get_zone_info": false, 00:19:00.588 "zone_management": false, 00:19:00.588 "zone_append": false, 00:19:00.588 "compare": false, 00:19:00.588 "compare_and_write": false, 00:19:00.588 "abort": false, 00:19:00.588 "seek_hole": false, 00:19:00.588 "seek_data": false, 00:19:00.588 "copy": false, 00:19:00.588 "nvme_iov_md": false 
00:19:00.588 }, 00:19:00.588 "driver_specific": { 00:19:00.588 "ftl": { 00:19:00.588 "base_bdev": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:19:00.588 "cache": "nvc0n1p0" 00:19:00.588 } 00:19:00.588 } 00:19:00.588 } 00:19:00.588 ] 00:19:00.588 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:00.588 01:18:34 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:00.588 01:18:34 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:00.848 01:18:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:00.848 01:18:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:01.107 01:18:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:01.107 { 00:19:01.107 "name": "ftl0", 00:19:01.107 "aliases": [ 00:19:01.107 "6143b464-b06c-4534-b843-f22084791cb4" 00:19:01.107 ], 00:19:01.107 "product_name": "FTL disk", 00:19:01.107 "block_size": 4096, 00:19:01.107 "num_blocks": 23592960, 00:19:01.107 "uuid": "6143b464-b06c-4534-b843-f22084791cb4", 00:19:01.107 "assigned_rate_limits": { 00:19:01.107 "rw_ios_per_sec": 0, 00:19:01.107 "rw_mbytes_per_sec": 0, 00:19:01.107 "r_mbytes_per_sec": 0, 00:19:01.107 "w_mbytes_per_sec": 0 00:19:01.107 }, 00:19:01.107 "claimed": false, 00:19:01.107 "zoned": false, 00:19:01.107 "supported_io_types": { 00:19:01.107 "read": true, 00:19:01.107 "write": true, 00:19:01.107 "unmap": true, 00:19:01.107 "flush": true, 00:19:01.107 "reset": false, 00:19:01.107 "nvme_admin": false, 00:19:01.107 "nvme_io": false, 00:19:01.107 "nvme_io_md": false, 00:19:01.107 "write_zeroes": true, 00:19:01.107 "zcopy": false, 00:19:01.107 "get_zone_info": false, 00:19:01.107 "zone_management": false, 00:19:01.107 "zone_append": false, 00:19:01.107 "compare": false, 00:19:01.107 "compare_and_write": false, 00:19:01.107 "abort": false, 00:19:01.107 "seek_hole": false, 00:19:01.107 "seek_data": false, 00:19:01.107 "copy": 
false, 00:19:01.107 "nvme_iov_md": false 00:19:01.107 }, 00:19:01.107 "driver_specific": { 00:19:01.107 "ftl": { 00:19:01.107 "base_bdev": "e5c5ddd9-95a1-4a11-9526-8b950ad46c1b", 00:19:01.107 "cache": "nvc0n1p0" 00:19:01.107 } 00:19:01.107 } 00:19:01.107 } 00:19:01.107 ]' 00:19:01.107 01:18:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:01.107 01:18:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:01.107 01:18:34 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:01.367 [2024-12-14 01:18:34.781420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-12-14 01:18:34.781459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.367 [2024-12-14 01:18:34.781470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.367 [2024-12-14 01:18:34.781476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-12-14 01:18:34.781509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:01.367 [2024-12-14 01:18:34.781941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-12-14 01:18:34.781958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.367 [2024-12-14 01:18:34.781976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:19:01.367 [2024-12-14 01:18:34.781993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-12-14 01:18:34.782493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-12-14 01:18:34.782518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.367 [2024-12-14 01:18:34.782525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:19:01.367 [2024-12-14 01:18:34.782533] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-12-14 01:18:34.785245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-12-14 01:18:34.785266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.367 [2024-12-14 01:18:34.785273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:19:01.367 [2024-12-14 01:18:34.785281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.790395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.790426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.368 [2024-12-14 01:18:34.790435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:19:01.368 [2024-12-14 01:18:34.790444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.791871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.791904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.368 [2024-12-14 01:18:34.791912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:19:01.368 [2024-12-14 01:18:34.791919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.796388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.796421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.368 [2024-12-14 01:18:34.796429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms 00:19:01.368 [2024-12-14 01:18:34.796436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.796593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 
01:18:34.796601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.368 [2024-12-14 01:18:34.796608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:01.368 [2024-12-14 01:18:34.796615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.798483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.798514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.368 [2024-12-14 01:18:34.798521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:19:01.368 [2024-12-14 01:18:34.798530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.799951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.799983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.368 [2024-12-14 01:18:34.799990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:19:01.368 [2024-12-14 01:18:34.799997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.801190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.801221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.368 [2024-12-14 01:18:34.801228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:19:01.368 [2024-12-14 01:18:34.801235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.802267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.368 [2024-12-14 01:18:34.802299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.368 [2024-12-14 01:18:34.802306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.956 ms 00:19:01.368 [2024-12-14 01:18:34.802313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.368 [2024-12-14 01:18:34.802350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.368 [2024-12-14 01:18:34.802363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 
01:18:34.802448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 
[2024-12-14 01:18:34.802538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:19:01.368 [2024-12-14 01:18:34.802646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: 
free 00:19:01.368 [2024-12-14 01:18:34.802738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:19:01.368 [2024-12-14 01:18:34.802829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 
0 state: free 00:19:01.368 [2024-12-14 01:18:34.802921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.802995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.368 [2024-12-14 01:18:34.803054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.369 [2024-12-14 01:18:34.803060] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:01.369 [2024-12-14 01:18:34.803067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.369 [2024-12-14 01:18:34.803073] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.369 [2024-12-14 01:18:34.803083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.369 [2024-12-14 01:18:34.803089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.369 [2024-12-14 01:18:34.803095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.369 [2024-12-14 01:18:34.803101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.369 [2024-12-14 01:18:34.803108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.369 [2024-12-14 01:18:34.803112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.369 [2024-12-14 01:18:34.803118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.369 [2024-12-14 01:18:34.803124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.369 [2024-12-14 
01:18:34.803131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.369 [2024-12-14 01:18:34.803137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:19:01.369 [2024-12-14 01:18:34.803145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.804567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.369 [2024-12-14 01:18:34.804588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.369 [2024-12-14 01:18:34.804596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:19:01.369 [2024-12-14 01:18:34.804603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.804689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.369 [2024-12-14 01:18:34.804697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.369 [2024-12-14 01:18:34.804704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:01.369 [2024-12-14 01:18:34.804711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.809615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.809654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.369 [2024-12-14 01:18:34.809671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.809681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.809749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.809758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.369 [2024-12-14 01:18:34.809765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.809773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.809821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.809832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.369 [2024-12-14 01:18:34.809838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.809845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.809879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.809887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.369 [2024-12-14 01:18:34.809893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.809900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.818804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.818845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.369 [2024-12-14 01:18:34.818853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.818861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.369 [2024-12-14 01:18:34.826155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826210] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.369 [2024-12-14 01:18:34.826227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.369 [2024-12-14 01:18:34.826295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.369 [2024-12-14 01:18:34.826395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.369 [2024-12-14 01:18:34.826463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:19:01.369 [2024-12-14 01:18:34.826533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.369 [2024-12-14 01:18:34.826597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.369 [2024-12-14 01:18:34.826604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.369 [2024-12-14 01:18:34.826610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.369 [2024-12-14 01:18:34.826777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.350 ms, result 0 00:19:01.369 true 00:19:01.369 01:18:34 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 88928 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88928 ']' 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88928 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88928 00:19:01.369 killing process with pid 88928 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88928' 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88928 00:19:01.369 01:18:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88928 00:19:06.640 01:18:39 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom 
bs=4K count=65536 00:19:06.900 65536+0 records in 00:19:06.900 65536+0 records out 00:19:06.900 268435456 bytes (268 MB, 256 MiB) copied, 0.817582 s, 328 MB/s 00:19:06.900 01:18:40 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.900 [2024-12-14 01:18:40.483870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:06.900 [2024-12-14 01:18:40.483996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89083 ] 00:19:07.162 [2024-12-14 01:18:40.616468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.162 [2024-12-14 01:18:40.639018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.162 [2024-12-14 01:18:40.725219] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.162 [2024-12-14 01:18:40.725278] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.425 [2024-12-14 01:18:40.871721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.871766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.425 [2024-12-14 01:18:40.871779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.425 [2024-12-14 01:18:40.871787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.874318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.874366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.425 [2024-12-14 
01:18:40.874377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:19:07.425 [2024-12-14 01:18:40.874388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.874492] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.425 [2024-12-14 01:18:40.874738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.425 [2024-12-14 01:18:40.874756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.874767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.425 [2024-12-14 01:18:40.874776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:07.425 [2024-12-14 01:18:40.874783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.875889] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:07.425 [2024-12-14 01:18:40.878923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.878959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:07.425 [2024-12-14 01:18:40.878971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:19:07.425 [2024-12-14 01:18:40.878983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.879048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.879059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:07.425 [2024-12-14 01:18:40.879068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:07.425 [2024-12-14 01:18:40.879075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 
01:18:40.884048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.884081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.425 [2024-12-14 01:18:40.884091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms 00:19:07.425 [2024-12-14 01:18:40.884104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.884204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.884217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.425 [2024-12-14 01:18:40.884225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:07.425 [2024-12-14 01:18:40.884238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.884262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.884270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.425 [2024-12-14 01:18:40.884278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:07.425 [2024-12-14 01:18:40.884284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.884304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:07.425 [2024-12-14 01:18:40.885665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.885690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.425 [2024-12-14 01:18:40.885699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:19:07.425 [2024-12-14 01:18:40.885709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.885758] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.425 [2024-12-14 01:18:40.885768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.425 [2024-12-14 01:18:40.885778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:07.425 [2024-12-14 01:18:40.885785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.425 [2024-12-14 01:18:40.885802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.425 [2024-12-14 01:18:40.885820] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:07.425 [2024-12-14 01:18:40.885861] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.425 [2024-12-14 01:18:40.885878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:07.425 [2024-12-14 01:18:40.885978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:07.425 [2024-12-14 01:18:40.885994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.425 [2024-12-14 01:18:40.886003] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:07.426 [2024-12-14 01:18:40.886013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886021] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:07.426 [2024-12-14 01:18:40.886038] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.426 [2024-12-14 
01:18:40.886049] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:07.426 [2024-12-14 01:18:40.886055] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:07.426 [2024-12-14 01:18:40.886065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.426 [2024-12-14 01:18:40.886074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.426 [2024-12-14 01:18:40.886082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:07.426 [2024-12-14 01:18:40.886089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.426 [2024-12-14 01:18:40.886181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.426 [2024-12-14 01:18:40.886194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.426 [2024-12-14 01:18:40.886201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:07.426 [2024-12-14 01:18:40.886208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.426 [2024-12-14 01:18:40.886305] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.426 [2024-12-14 01:18:40.886320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.426 [2024-12-14 01:18:40.886329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.426 [2024-12-14 01:18:40.886353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:19:07.426 [2024-12-14 01:18:40.886378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.426 [2024-12-14 01:18:40.886393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.426 [2024-12-14 01:18:40.886400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:07.426 [2024-12-14 01:18:40.886407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.426 [2024-12-14 01:18:40.886415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.426 [2024-12-14 01:18:40.886422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:07.426 [2024-12-14 01:18:40.886429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.426 [2024-12-14 01:18:40.886444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.426 [2024-12-14 01:18:40.886468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.426 [2024-12-14 01:18:40.886490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886507] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:19:07.426 [2024-12-14 01:18:40.886515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.426 [2024-12-14 01:18:40.886536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.426 [2024-12-14 01:18:40.886558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.426 [2024-12-14 01:18:40.886573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.426 [2024-12-14 01:18:40.886580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:07.426 [2024-12-14 01:18:40.886587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.426 [2024-12-14 01:18:40.886594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:07.426 [2024-12-14 01:18:40.886602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:07.426 [2024-12-14 01:18:40.886608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:07.426 [2024-12-14 01:18:40.886638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:07.426 [2024-12-14 01:18:40.886645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886652] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.426 [2024-12-14 01:18:40.886661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.426 [2024-12-14 01:18:40.886669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.426 [2024-12-14 01:18:40.886689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.426 [2024-12-14 01:18:40.886697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.426 [2024-12-14 01:18:40.886704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.426 [2024-12-14 01:18:40.886713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.426 [2024-12-14 01:18:40.886721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.426 [2024-12-14 01:18:40.886729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.426 [2024-12-14 01:18:40.886737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.426 [2024-12-14 01:18:40.886748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:07.426 [2024-12-14 01:18:40.886765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:07.426 [2024-12-14 01:18:40.886773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:07.426 [2024-12-14 01:18:40.886779] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:07.426 [2024-12-14 01:18:40.886786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:07.426 [2024-12-14 01:18:40.886793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:07.426 [2024-12-14 01:18:40.886800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:07.426 [2024-12-14 01:18:40.886811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:07.426 [2024-12-14 01:18:40.886818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:07.426 [2024-12-14 01:18:40.886824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:07.426 [2024-12-14 01:18:40.886858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout 
- base dev: 00:19:07.426 [2024-12-14 01:18:40.886869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.426 [2024-12-14 01:18:40.886885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.426 [2024-12-14 01:18:40.886892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.426 [2024-12-14 01:18:40.886899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.426 [2024-12-14 01:18:40.886906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.426 [2024-12-14 01:18:40.886916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.426 [2024-12-14 01:18:40.886923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:19:07.427 [2024-12-14 01:18:40.886935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.895755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.895788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.427 [2024-12-14 01:18:40.895797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.768 ms 00:19:07.427 [2024-12-14 01:18:40.895809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.895922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.895935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize band addresses 00:19:07.427 [2024-12-14 01:18:40.895943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:07.427 [2024-12-14 01:18:40.895950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.915046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.915265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.427 [2024-12-14 01:18:40.915295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.073 ms 00:19:07.427 [2024-12-14 01:18:40.915310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.915443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.915463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.427 [2024-12-14 01:18:40.915479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:07.427 [2024-12-14 01:18:40.915492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.915938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.915975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.427 [2024-12-14 01:18:40.915993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:07.427 [2024-12-14 01:18:40.916008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.916226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.916257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.427 [2024-12-14 01:18:40.916277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:19:07.427 [2024-12-14 01:18:40.916290] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.923169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.923208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.427 [2024-12-14 01:18:40.923217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:19:07.427 [2024-12-14 01:18:40.923225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.926104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:07.427 [2024-12-14 01:18:40.926227] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:07.427 [2024-12-14 01:18:40.926242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.926249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:07.427 [2024-12-14 01:18:40.926257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:19:07.427 [2024-12-14 01:18:40.926264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.940954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.941074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:07.427 [2024-12-14 01:18:40.941090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.637 ms 00:19:07.427 [2024-12-14 01:18:40.941098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.943428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.943462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:07.427 [2024-12-14 
01:18:40.943471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:19:07.427 [2024-12-14 01:18:40.943477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.945437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.945469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:07.427 [2024-12-14 01:18:40.945484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 00:19:07.427 [2024-12-14 01:18:40.945491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.945832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.945844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.427 [2024-12-14 01:18:40.945856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:07.427 [2024-12-14 01:18:40.945863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.963451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.963502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:07.427 [2024-12-14 01:18:40.963515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:19:07.427 [2024-12-14 01:18:40.963523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.971311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:07.427 [2024-12-14 01:18:40.987007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.987049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.427 [2024-12-14 01:18:40.987072] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.359 ms 00:19:07.427 [2024-12-14 01:18:40.987084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.987169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.987181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:07.427 [2024-12-14 01:18:40.987190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:07.427 [2024-12-14 01:18:40.987201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.987251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.987261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.427 [2024-12-14 01:18:40.987269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:07.427 [2024-12-14 01:18:40.987277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.987299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.987307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.427 [2024-12-14 01:18:40.987315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:07.427 [2024-12-14 01:18:40.987323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.987359] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:07.427 [2024-12-14 01:18:40.987369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.987377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:07.427 [2024-12-14 01:18:40.987385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:07.427 [2024-12-14 
01:18:40.987392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.992026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.992065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.427 [2024-12-14 01:18:40.992077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.615 ms 00:19:07.427 [2024-12-14 01:18:40.992085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.992166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.427 [2024-12-14 01:18:40.992176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.427 [2024-12-14 01:18:40.992185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:07.427 [2024-12-14 01:18:40.992197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.427 [2024-12-14 01:18:40.993069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.427 [2024-12-14 01:18:40.994213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.048 ms, result 0 00:19:07.427 [2024-12-14 01:18:40.994961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:07.427 [2024-12-14 01:18:41.004147] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.817  [2024-12-14T01:18:43.420Z] Copying: 14/256 [MB] (14 MBps) [2024-12-14T01:18:44.369Z] Copying: 38/256 [MB] (24 MBps) [2024-12-14T01:18:45.316Z] Copying: 54/256 [MB] (15 MBps) [2024-12-14T01:18:46.263Z] Copying: 71/256 [MB] (17 MBps) [2024-12-14T01:18:47.208Z] Copying: 90/256 [MB] (18 MBps) [2024-12-14T01:18:48.153Z] Copying: 108/256 [MB] (18 MBps) [2024-12-14T01:18:49.099Z] 
Copying: 127/256 [MB] (18 MBps) [2024-12-14T01:18:50.042Z] Copying: 145/256 [MB] (18 MBps) [2024-12-14T01:18:51.429Z] Copying: 161/256 [MB] (15 MBps) [2024-12-14T01:18:52.369Z] Copying: 177/256 [MB] (16 MBps) [2024-12-14T01:18:53.312Z] Copying: 209/256 [MB] (31 MBps) [2024-12-14T01:18:53.884Z] Copying: 237/256 [MB] (28 MBps) [2024-12-14T01:18:53.884Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-14 01:18:53.801091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.272 [2024-12-14 01:18:53.802095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.272 [2024-12-14 01:18:53.802128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.272 [2024-12-14 01:18:53.802138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:20.272 [2024-12-14 01:18:53.802145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.272 [2024-12-14 01:18:53.802161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:20.273 [2024-12-14 01:18:53.802527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.802550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.273 [2024-12-14 01:18:53.802558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:19:20.273 [2024-12-14 01:18:53.802564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.804278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.804304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.273 [2024-12-14 01:18:53.804312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:19:20.273 [2024-12-14 01:18:53.804321] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.809565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.809592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.273 [2024-12-14 01:18:53.809600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.231 ms 00:19:20.273 [2024-12-14 01:18:53.809612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.814969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.814992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.273 [2024-12-14 01:18:53.814999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.311 ms 00:19:20.273 [2024-12-14 01:18:53.815006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.815911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.815937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.273 [2024-12-14 01:18:53.815944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:19:20.273 [2024-12-14 01:18:53.815950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.819209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.819242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.273 [2024-12-14 01:18:53.819249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:19:20.273 [2024-12-14 01:18:53.819255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.819348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.819355] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.273 [2024-12-14 01:18:53.819361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:20.273 [2024-12-14 01:18:53.819369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.821730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.821756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.273 [2024-12-14 01:18:53.821763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:19:20.273 [2024-12-14 01:18:53.821768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.823311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.823336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.273 [2024-12-14 01:18:53.823344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:19:20.273 [2024-12-14 01:18:53.823349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.824401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.824427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.273 [2024-12-14 01:18:53.824434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:19:20.273 [2024-12-14 01:18:53.824439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.825302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.273 [2024-12-14 01:18:53.825329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.273 [2024-12-14 01:18:53.825337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.819 ms 00:19:20.273 [2024-12-14 01:18:53.825343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.273 [2024-12-14 01:18:53.825367] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.273 [2024-12-14 01:18:53.825379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825469] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825562] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 
01:18:53.825663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.273 [2024-12-14 01:18:53.825692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 
[2024-12-14 01:18:53.825743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:19:20.274 [2024-12-14 01:18:53.825823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: 
free 00:19:20.274 [2024-12-14 01:18:53.825901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 
state: free 00:19:20.274 [2024-12-14 01:18:53.825979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.825997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.274 [2024-12-14 01:18:53.826009] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.274 [2024-12-14 01:18:53.826015] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:20.274 [2024-12-14 01:18:53.826021] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.274 [2024-12-14 01:18:53.826026] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.274 [2024-12-14 01:18:53.826032] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.274 [2024-12-14 01:18:53.826041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.274 [2024-12-14 01:18:53.826051] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.274 [2024-12-14 01:18:53.826057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.274 [2024-12-14 01:18:53.826063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.274 [2024-12-14 01:18:53.826068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.274 [2024-12-14 01:18:53.826073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.274 [2024-12-14 01:18:53.826078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-12-14 01:18:53.826088] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.274 [2024-12-14 01:18:53.826095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:19:20.274 [2024-12-14 01:18:53.826100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.827367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-12-14 01:18:53.827387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.274 [2024-12-14 01:18:53.827396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:19:20.274 [2024-12-14 01:18:53.827402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.827474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-12-14 01:18:53.827481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.274 [2024-12-14 01:18:53.827487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:20.274 [2024-12-14 01:18:53.827492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.831838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.274 [2024-12-14 01:18:53.831866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.274 [2024-12-14 01:18:53.831873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.274 [2024-12-14 01:18:53.831878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.831929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.274 [2024-12-14 01:18:53.831935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.274 [2024-12-14 01:18:53.831941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:19:20.274 [2024-12-14 01:18:53.831946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.831981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.274 [2024-12-14 01:18:53.831988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.274 [2024-12-14 01:18:53.831994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.274 [2024-12-14 01:18:53.831999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-12-14 01:18:53.832012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.832020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.275 [2024-12-14 01:18:53.832026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.832031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.839722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.839754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.275 [2024-12-14 01:18:53.839762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.839768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.845803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.845835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.275 [2024-12-14 01:18:53.845842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.845849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.845883] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.845890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.275 [2024-12-14 01:18:53.845896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.845902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.845924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.845930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.275 [2024-12-14 01:18:53.845941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.845946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.846000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.846009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.275 [2024-12-14 01:18:53.846016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.846026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.846049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.846055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.275 [2024-12-14 01:18:53.846063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.846068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.846097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.846103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.275 [2024-12-14 
01:18:53.846109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.846115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.846148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.275 [2024-12-14 01:18:53.846159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.275 [2024-12-14 01:18:53.846166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.275 [2024-12-14 01:18:53.846172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-12-14 01:18:53.846272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.159 ms, result 0 00:19:20.846 00:19:20.846 00:19:20.846 01:18:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89237 00:19:20.846 01:18:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89237 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89237 ']' 00:19:20.846 01:18:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.846 01:18:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:20.846 [2024-12-14 01:18:54.359395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:20.846 [2024-12-14 01:18:54.361697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89237 ] 00:19:21.107 [2024-12-14 01:18:54.507101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.107 [2024-12-14 01:18:54.524919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.677 01:18:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.677 01:18:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:21.677 01:18:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:21.936 [2024-12-14 01:18:55.392712] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.936 [2024-12-14 01:18:55.392870] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.198 [2024-12-14 01:18:55.551725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.551853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.198 [2024-12-14 01:18:55.551868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.198 [2024-12-14 01:18:55.551877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.553609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.553654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.198 [2024-12-14 01:18:55.553662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:19:22.198 [2024-12-14 01:18:55.553668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.553725] 
mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.198 [2024-12-14 01:18:55.553905] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.198 [2024-12-14 01:18:55.553916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.553923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.198 [2024-12-14 01:18:55.553930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:22.198 [2024-12-14 01:18:55.553936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.555089] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.198 [2024-12-14 01:18:55.557147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.557174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.198 [2024-12-14 01:18:55.557184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:19:22.198 [2024-12-14 01:18:55.557190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.557238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.557249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.198 [2024-12-14 01:18:55.557258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:22.198 [2024-12-14 01:18:55.557264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.561630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.561653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.198 [2024-12-14 
01:18:55.561662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:19:22.198 [2024-12-14 01:18:55.561668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.561756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.561764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.198 [2024-12-14 01:18:55.561772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:22.198 [2024-12-14 01:18:55.561780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.198 [2024-12-14 01:18:55.561800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.198 [2024-12-14 01:18:55.561806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.199 [2024-12-14 01:18:55.561815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:22.199 [2024-12-14 01:18:55.561821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.199 [2024-12-14 01:18:55.561838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.199 [2024-12-14 01:18:55.563056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.199 [2024-12-14 01:18:55.563132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.199 [2024-12-14 01:18:55.563192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:19:22.199 [2024-12-14 01:18:55.563212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.199 [2024-12-14 01:18:55.563252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.199 [2024-12-14 01:18:55.563270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.199 [2024-12-14 01:18:55.563285] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.199 [2024-12-14 01:18:55.563337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.199 [2024-12-14 01:18:55.563366] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.199 [2024-12-14 01:18:55.563392] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.199 [2024-12-14 01:18:55.563439] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.199 [2024-12-14 01:18:55.563521] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.199 [2024-12-14 01:18:55.563639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.199 [2024-12-14 01:18:55.563669] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.199 [2024-12-14 01:18:55.563728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.199 [2024-12-14 01:18:55.563759] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.199 [2024-12-14 01:18:55.563783] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.199 [2024-12-14 01:18:55.563835] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.199 [2024-12-14 01:18:55.563855] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.199 [2024-12-14 01:18:55.563871] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.199 [2024-12-14 01:18:55.563887] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.199 
[2024-12-14 01:18:55.563972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.199 [2024-12-14 01:18:55.564033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.199 [2024-12-14 01:18:55.564050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:19:22.199 [2024-12-14 01:18:55.564064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.199 [2024-12-14 01:18:55.564145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.199 [2024-12-14 01:18:55.564162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.199 [2024-12-14 01:18:55.564179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:22.199 [2024-12-14 01:18:55.564225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.199 [2024-12-14 01:18:55.564318] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.199 [2024-12-14 01:18:55.564336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.199 [2024-12-14 01:18:55.564355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.199 [2024-12-14 01:18:55.564437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.199 [2024-12-14 01:18:55.564483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:19:22.199 [2024-12-14 01:18:55.564539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.199 [2024-12-14 01:18:55.564554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.199 [2024-12-14 01:18:55.564569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.199 [2024-12-14 01:18:55.564583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.199 [2024-12-14 01:18:55.564598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:22.199 [2024-12-14 01:18:55.564653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.199 [2024-12-14 01:18:55.564687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.199 [2024-12-14 01:18:55.564733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.199 [2024-12-14 01:18:55.564806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.199 [2024-12-14 01:18:55.564874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.199 [2024-12-14 01:18:55.564897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.199 [2024-12-14 01:18:55.564915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.199 [2024-12-14 01:18:55.564926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.199 [2024-12-14 01:18:55.564931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:22.199 [2024-12-14 01:18:55.564939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.199 [2024-12-14 01:18:55.564944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.199 [2024-12-14 01:18:55.564950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:22.199 [2024-12-14 01:18:55.564955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.199 [2024-12-14 01:18:55.564966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:22.199 [2024-12-14 01:18:55.564973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.564978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.199 [2024-12-14 01:18:55.564985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.199 [2024-12-14 01:18:55.564991] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.199 [2024-12-14 01:18:55.564997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.199 [2024-12-14 01:18:55.565003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.199 [2024-12-14 01:18:55.565009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.199 [2024-12-14 01:18:55.565014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.199 [2024-12-14 01:18:55.565021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.199 [2024-12-14 01:18:55.565026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.199 [2024-12-14 01:18:55.565035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.199 [2024-12-14 01:18:55.565041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.199 [2024-12-14 01:18:55.565050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.199 [2024-12-14 01:18:55.565057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.199 [2024-12-14 01:18:55.565064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:22.199 [2024-12-14 01:18:55.565069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:22.199 [2024-12-14 01:18:55.565075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:22.199 [2024-12-14 01:18:55.565081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:22.199 [2024-12-14 01:18:55.565089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:22.200 [2024-12-14 01:18:55.565095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:22.200 [2024-12-14 01:18:55.565101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:22.200 [2024-12-14 01:18:55.565107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:22.200 [2024-12-14 01:18:55.565113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:22.200 [2024-12-14 01:18:55.565148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.200 [2024-12-14 01:18:55.565157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565162] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.200 [2024-12-14 01:18:55.565169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.200 [2024-12-14 01:18:55.565175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.200 [2024-12-14 01:18:55.565182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.200 [2024-12-14 01:18:55.565188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.565196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.200 [2024-12-14 01:18:55.565201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:19:22.200 [2024-12-14 01:18:55.565208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.573090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.573179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.200 [2024-12-14 01:18:55.573221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.835 ms 00:19:22.200 [2024-12-14 01:18:55.573244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.573346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.573374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.200 [2024-12-14 01:18:55.573469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:22.200 [2024-12-14 01:18:55.573489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:22.200 [2024-12-14 01:18:55.580922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.581033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.200 [2024-12-14 01:18:55.581077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.406 ms 00:19:22.200 [2024-12-14 01:18:55.581099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.581149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.581169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.200 [2024-12-14 01:18:55.581184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:22.200 [2024-12-14 01:18:55.581200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.581538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.581641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.200 [2024-12-14 01:18:55.581692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:22.200 [2024-12-14 01:18:55.581712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.581827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.581848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.200 [2024-12-14 01:18:55.581884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:22.200 [2024-12-14 01:18:55.581902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.586645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.586733] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.200 [2024-12-14 01:18:55.586775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:19:22.200 [2024-12-14 01:18:55.586794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.600058] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:22.200 [2024-12-14 01:18:55.600296] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.200 [2024-12-14 01:18:55.600421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.600512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.200 [2024-12-14 01:18:55.600556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.536 ms 00:19:22.200 [2024-12-14 01:18:55.600661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.616504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.616601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:22.200 [2024-12-14 01:18:55.616655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:19:22.200 [2024-12-14 01:18:55.616676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.618272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.618364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:22.200 [2024-12-14 01:18:55.618438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:19:22.200 [2024-12-14 01:18:55.618460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.619685] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.619772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:22.200 [2024-12-14 01:18:55.619811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:19:22.200 [2024-12-14 01:18:55.619829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.620079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.620113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:22.200 [2024-12-14 01:18:55.620186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:19:22.200 [2024-12-14 01:18:55.620206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.634271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.634384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:22.200 [2024-12-14 01:18:55.634428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.037 ms 00:19:22.200 [2024-12-14 01:18:55.634449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.640161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:22.200 [2024-12-14 01:18:55.651536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.651650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:22.200 [2024-12-14 01:18:55.651695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.035 ms 00:19:22.200 [2024-12-14 01:18:55.651712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.651800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.651823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:22.200 [2024-12-14 01:18:55.651840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.200 [2024-12-14 01:18:55.651855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.651906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.652040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:22.200 [2024-12-14 01:18:55.652059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:22.200 [2024-12-14 01:18:55.652073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.652100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.652116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:22.200 [2024-12-14 01:18:55.652202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.200 [2024-12-14 01:18:55.652224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.652264] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:22.200 [2024-12-14 01:18:55.652282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.652299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:22.200 [2024-12-14 01:18:55.652353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:22.200 [2024-12-14 01:18:55.652372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.200 [2024-12-14 01:18:55.655414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.200 [2024-12-14 01:18:55.655509] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:22.200 [2024-12-14 01:18:55.655552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:19:22.200 [2024-12-14 01:18:55.655576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.201 [2024-12-14 01:18:55.655650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.201 [2024-12-14 01:18:55.655727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:22.201 [2024-12-14 01:18:55.655814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:22.201 [2024-12-14 01:18:55.655834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.201 [2024-12-14 01:18:55.656801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.201 [2024-12-14 01:18:55.657728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.835 ms, result 0 00:19:22.201 [2024-12-14 01:18:55.658732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.201 Some configs were skipped because the RPC state that can call them passed over. 
00:19:22.201 01:18:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:22.462 [2024-12-14 01:18:55.881789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.462 [2024-12-14 01:18:55.881885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:22.462 [2024-12-14 01:18:55.881975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:19:22.462 [2024-12-14 01:18:55.882000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.462 [2024-12-14 01:18:55.882052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.668 ms, result 0 00:19:22.462 true 00:19:22.462 01:18:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:22.724 [2024-12-14 01:18:56.082445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.082546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:22.724 [2024-12-14 01:18:56.082587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:19:22.724 [2024-12-14 01:18:56.082606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.082653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.101 ms, result 0 00:19:22.724 true 00:19:22.724 01:18:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89237 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89237 ']' 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89237 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89237 00:19:22.724 killing process with pid 89237 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89237' 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89237 00:19:22.724 01:18:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89237 00:19:22.724 [2024-12-14 01:18:56.207931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.207973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.724 [2024-12-14 01:18:56.207983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:22.724 [2024-12-14 01:18:56.207989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.208008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:22.724 [2024-12-14 01:18:56.208406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.208426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.724 [2024-12-14 01:18:56.208435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:19:22.724 [2024-12-14 01:18:56.208442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.208836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.208866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.724 [2024-12-14 01:18:56.208885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.364 ms 00:19:22.724 [2024-12-14 01:18:56.208903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.212156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.212243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.724 [2024-12-14 01:18:56.212346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:19:22.724 [2024-12-14 01:18:56.212376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.217529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.217628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.724 [2024-12-14 01:18:56.217668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:19:22.724 [2024-12-14 01:18:56.217689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.219069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.219155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.724 [2024-12-14 01:18:56.219193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:19:22.724 [2024-12-14 01:18:56.219211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.223015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.223103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.724 [2024-12-14 01:18:56.223144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:19:22.724 [2024-12-14 01:18:56.223165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.223271] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.223304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.724 [2024-12-14 01:18:56.223319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:22.724 [2024-12-14 01:18:56.223358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.225058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.225090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.724 [2024-12-14 01:18:56.225097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:19:22.724 [2024-12-14 01:18:56.225105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.226390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.226480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.724 [2024-12-14 01:18:56.226490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:19:22.724 [2024-12-14 01:18:56.226497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.227518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.227545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.724 [2024-12-14 01:18:56.227551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:19:22.724 [2024-12-14 01:18:56.227558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.228511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.724 [2024-12-14 01:18:56.228540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.724 
[2024-12-14 01:18:56.228547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:19:22.724 [2024-12-14 01:18:56.228553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.724 [2024-12-14 01:18:56.228578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.724 [2024-12-14 01:18:56.228591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.724 [2024-12-14 01:18:56.228729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228951] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.228995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229041] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229129] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 
01:18:56.229217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.725 [2024-12-14 01:18:56.229256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.726 [2024-12-14 01:18:56.229262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:22.726 [2024-12-14 01:18:56.229270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.726 [2024-12-14 01:18:56.229276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.726 [2024-12-14 01:18:56.229286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.726 [2024-12-14 01:18:56.229292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.726 [2024-12-14 01:18:56.229298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.726 [2024-12-14 01:18:56.229307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.726 [2024-12-14 01:18:56.229316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.726 [2024-12-14 01:18:56.229321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.726 [2024-12-14 01:18:56.229327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.726 [2024-12-14 01:18:56.229332] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.726 [2024-12-14 01:18:56.229338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.726 [2024-12-14 01:18:56.229345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:19:22.726 [2024-12-14 01:18:56.229352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.230596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.726 [2024-12-14 01:18:56.230616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.726 [2024-12-14 01:18:56.230634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:19:22.726 [2024-12-14 01:18:56.230641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.230715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.726 [2024-12-14 01:18:56.230724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.726 [2024-12-14 01:18:56.230730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:22.726 [2024-12-14 01:18:56.230737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.235249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.235280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.726 [2024-12-14 01:18:56.235288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.235295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.235342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.235351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:19:22.726 [2024-12-14 01:18:56.235357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.235366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.235397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.235408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.726 [2024-12-14 01:18:56.235414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.235421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.235435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.235442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.726 [2024-12-14 01:18:56.235448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.235455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.243664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.243806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.726 [2024-12-14 01:18:56.243818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.243830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.249959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.249993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.726 [2024-12-14 01:18:56.250001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250009] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.726 [2024-12-14 01:18:56.250057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.726 [2024-12-14 01:18:56.250102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.726 [2024-12-14 01:18:56.250180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.726 [2024-12-14 01:18:56.250224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250269] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.726 [2024-12-14 01:18:56.250276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.726 [2024-12-14 01:18:56.250329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.726 [2024-12-14 01:18:56.250335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.726 [2024-12-14 01:18:56.250341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.726 [2024-12-14 01:18:56.250447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.497 ms, result 0 00:19:22.987 01:18:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:22.987 01:18:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.987 [2024-12-14 01:18:56.471492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:22.987 [2024-12-14 01:18:56.471636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89273 ] 00:19:23.248 [2024-12-14 01:18:56.613014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.248 [2024-12-14 01:18:56.630735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.248 [2024-12-14 01:18:56.712918] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.248 [2024-12-14 01:18:56.712972] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.249 [2024-12-14 01:18:56.855331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.249 [2024-12-14 01:18:56.855368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.249 [2024-12-14 01:18:56.855378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:23.249 [2024-12-14 01:18:56.855384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.249 [2024-12-14 01:18:56.857166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.249 [2024-12-14 01:18:56.857198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.249 [2024-12-14 01:18:56.857205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:19:23.249 [2024-12-14 01:18:56.857211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.249 [2024-12-14 01:18:56.857269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.249 [2024-12-14 01:18:56.857451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.249 [2024-12-14 
01:18:56.857464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.249 [2024-12-14 01:18:56.857469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.249 [2024-12-14 01:18:56.857475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:23.249 [2024-12-14 01:18:56.857481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.249 [2024-12-14 01:18:56.858411] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:23.511 [2024-12-14 01:18:56.860654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.860681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:23.511 [2024-12-14 01:18:56.860689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:19:23.511 [2024-12-14 01:18:56.860695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.860750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.860758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:23.511 [2024-12-14 01:18:56.860765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:23.511 [2024-12-14 01:18:56.860770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.865335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.865441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.511 [2024-12-14 01:18:56.865452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:19:23.511 [2024-12-14 01:18:56.865458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.865550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.865561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.511 [2024-12-14 01:18:56.865567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:23.511 [2024-12-14 01:18:56.865574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.865592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.865598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.511 [2024-12-14 01:18:56.865604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.511 [2024-12-14 01:18:56.865610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.865636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.511 [2024-12-14 01:18:56.866810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.866832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.511 [2024-12-14 01:18:56.866839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:19:23.511 [2024-12-14 01:18:56.866847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.866878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.511 [2024-12-14 01:18:56.866885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.511 [2024-12-14 01:18:56.866895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:23.511 [2024-12-14 01:18:56.866903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.511 [2024-12-14 01:18:56.866916] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:19:23.511 [2024-12-14 01:18:56.866930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:23.511 [2024-12-14 01:18:56.866960] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:23.511 [2024-12-14 01:18:56.866972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:23.511 [2024-12-14 01:18:56.867050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.511 [2024-12-14 01:18:56.867058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.511 [2024-12-14 01:18:56.867066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:23.511 [2024-12-14 01:18:56.867073] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.511 [2024-12-14 01:18:56.867080] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.511 [2024-12-14 01:18:56.867086] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.512 [2024-12-14 01:18:56.867091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.512 [2024-12-14 01:18:56.867097] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.512 [2024-12-14 01:18:56.867102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.512 [2024-12-14 01:18:56.867110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.512 [2024-12-14 01:18:56.867116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.512 [2024-12-14 01:18:56.867122] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:23.512 [2024-12-14 01:18:56.867127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.512 [2024-12-14 01:18:56.867194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.512 [2024-12-14 01:18:56.867200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.512 [2024-12-14 01:18:56.867206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:23.512 [2024-12-14 01:18:56.867214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.512 [2024-12-14 01:18:56.867286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.512 [2024-12-14 01:18:56.867299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.512 [2024-12-14 01:18:56.867305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.512 [2024-12-14 01:18:56.867327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.512 [2024-12-14 01:18:56.867342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.512 [2024-12-14 01:18:56.867353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.512 [2024-12-14 01:18:56.867358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.512 [2024-12-14 01:18:56.867363] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.512 [2024-12-14 01:18:56.867368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.512 [2024-12-14 01:18:56.867375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.512 [2024-12-14 01:18:56.867380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.512 [2024-12-14 01:18:56.867390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.512 [2024-12-14 01:18:56.867406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.512 [2024-12-14 01:18:56.867422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.512 [2024-12-14 01:18:56.867440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:23.512 [2024-12-14 01:18:56.867455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867459] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.512 [2024-12-14 01:18:56.867471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.512 [2024-12-14 01:18:56.867482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.512 [2024-12-14 01:18:56.867488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.512 [2024-12-14 01:18:56.867493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.512 [2024-12-14 01:18:56.867498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.512 [2024-12-14 01:18:56.867504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.512 [2024-12-14 01:18:56.867509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.512 [2024-12-14 01:18:56.867523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.512 [2024-12-14 01:18:56.867529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.512 [2024-12-14 01:18:56.867540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.512 [2024-12-14 01:18:56.867549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.512 [2024-12-14 01:18:56.867556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.512 [2024-12-14 01:18:56.867562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:19:23.512 [2024-12-14 01:18:56.867567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:19:23.512 [2024-12-14 01:18:56.867572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:19:23.512 [2024-12-14 01:18:56.867577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:23.512 [2024-12-14 01:18:56.867582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:19:23.512 [2024-12-14 01:18:56.867587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:19:23.512 [2024-12-14 01:18:56.867593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:23.512 [2024-12-14 01:18:56.867599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:23.512 [2024-12-14 01:18:56.867611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:23.512 [2024-12-14 01:18:56.867618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:23.512 [2024-12-14 01:18:56.867634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:23.512 [2024-12-14 01:18:56.867639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:23.512 [2024-12-14 01:18:56.867645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:23.512 [2024-12-14 01:18:56.867650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:23.512 [2024-12-14 01:18:56.867659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:23.512 [2024-12-14 01:18:56.867664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:23.512 [2024-12-14 01:18:56.867669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:23.512 [2024-12-14 01:18:56.867694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:23.512 [2024-12-14 01:18:56.867702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:23.512 [2024-12-14 01:18:56.867713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:23.512 [2024-12-14 01:18:56.867719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:23.512 [2024-12-14 01:18:56.867725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:23.512 [2024-12-14 01:18:56.867730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.512 [2024-12-14 01:18:56.867738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:23.512 [2024-12-14 01:18:56.867743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms
00:19:23.512 [2024-12-14 01:18:56.867750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.512 [2024-12-14 01:18:56.875528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.875555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:23.513 [2024-12-14 01:18:56.875563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.741 ms
00:19:23.513 [2024-12-14 01:18:56.875569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.875668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.875682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:23.513 [2024-12-14 01:18:56.875688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms
00:19:23.513 [2024-12-14 01:18:56.875693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.900427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.900517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:23.513 [2024-12-14 01:18:56.900552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.711 ms
00:19:23.513 [2024-12-14 01:18:56.900581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.900799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.900839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:23.513 [2024-12-14 01:18:56.900864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:23.513 [2024-12-14 01:18:56.900883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.901374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.901499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:23.513 [2024-12-14 01:18:56.901532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms
00:19:23.513 [2024-12-14 01:18:56.901553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.901879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.902029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:23.513 [2024-12-14 01:18:56.902040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms
00:19:23.513 [2024-12-14 01:18:56.902050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.906768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.906793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:23.513 [2024-12-14 01:18:56.906801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms
00:19:23.513 [2024-12-14 01:18:56.906807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.908936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:19:23.513 [2024-12-14 01:18:56.909032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:23.513 [2024-12-14 01:18:56.909044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.909050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:19:23.513 [2024-12-14 01:18:56.909056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms
00:19:23.513 [2024-12-14 01:18:56.909061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.920179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.920290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:19:23.513 [2024-12-14 01:18:56.920304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.086 ms
00:19:23.513 [2024-12-14 01:18:56.920311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.921796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.921824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:19:23.513 [2024-12-14 01:18:56.921832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms
00:19:23.513 [2024-12-14 01:18:56.921838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.923127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.923152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:19:23.513 [2024-12-14 01:18:56.923159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms
00:19:23.513 [2024-12-14 01:18:56.923170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.923414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.923538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:23.513 [2024-12-14 01:18:56.923550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms
00:19:23.513 [2024-12-14 01:18:56.923557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.937856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.937983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:23.513 [2024-12-14 01:18:56.937997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.278 ms
00:19:23.513 [2024-12-14 01:18:56.938004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.943718] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:23.513 [2024-12-14 01:18:56.955724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.955757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:23.513 [2024-12-14 01:18:56.955765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms
00:19:23.513 [2024-12-14 01:18:56.955774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.955848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.955857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:23.513 [2024-12-14 01:18:56.955868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:19:23.513 [2024-12-14 01:18:56.955873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.955909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.955916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:23.513 [2024-12-14 01:18:56.955924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:19:23.513 [2024-12-14 01:18:56.955930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.955948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.955955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:23.513 [2024-12-14 01:18:56.955960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:23.513 [2024-12-14 01:18:56.955968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.955992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:23.513 [2024-12-14 01:18:56.955999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.956004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:23.513 [2024-12-14 01:18:56.956010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:23.513 [2024-12-14 01:18:56.956016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.959098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.959129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:23.513 [2024-12-14 01:18:56.959137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms
00:19:23.513 [2024-12-14 01:18:56.959149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.959211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:23.513 [2024-12-14 01:18:56.959220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:23.513 [2024-12-14 01:18:56.959228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:19:23.513 [2024-12-14 01:18:56.959234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:23.513 [2024-12-14 01:18:56.959874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:23.513 [2024-12-14 01:18:56.960702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.339 ms, result 0
00:19:23.513 [2024-12-14 01:18:56.961287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:23.513 [2024-12-14 01:18:56.969114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:24.456  [2024-12-14T01:18:59.010Z] Copying: 22/256 [MB] (22 MBps) [2024-12-14T01:19:00.395Z] Copying: 38/256 [MB] (15 MBps) [2024-12-14T01:19:01.339Z] Copying: 60/256 [MB] (22 MBps) [2024-12-14T01:19:02.283Z] Copying: 85/256 [MB] (24 MBps) [2024-12-14T01:19:03.229Z] Copying: 103/256 [MB] (18 MBps) [2024-12-14T01:19:04.175Z] Copying: 119/256 [MB] (16 MBps) [2024-12-14T01:19:05.120Z] Copying: 136/256 [MB] (16 MBps) [2024-12-14T01:19:06.065Z] Copying: 146/256 [MB] (10 MBps) [2024-12-14T01:19:07.013Z] Copying: 165/256 [MB] (19 MBps) [2024-12-14T01:19:08.470Z] Copying: 189/256 [MB] (23 MBps) [2024-12-14T01:19:09.052Z] Copying: 211/256 [MB] (22 MBps) [2024-12-14T01:19:09.997Z] Copying: 229/256 [MB] (18 MBps) [2024-12-14T01:19:10.258Z] Copying: 250/256 [MB] (20 MBps) [2024-12-14T01:19:10.258Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-14 01:19:10.250912] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:36.646 [2024-12-14 01:19:10.252718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.646 [2024-12-14 01:19:10.252769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:36.646 [2024-12-14 01:19:10.252783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:36.646 [2024-12-14 01:19:10.252792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.646 [2024-12-14 01:19:10.252815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:36.646 [2024-12-14 01:19:10.253526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.646 [2024-12-14 01:19:10.253563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:36.646 [2024-12-14 01:19:10.253575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms
00:19:36.646 [2024-12-14 01:19:10.253584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.646 [2024-12-14 01:19:10.253874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.646 [2024-12-14 01:19:10.253907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:36.646 [2024-12-14 01:19:10.253921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms
00:19:36.646 [2024-12-14 01:19:10.253930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.257640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.257666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:36.910 [2024-12-14 01:19:10.257677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms
00:19:36.910 [2024-12-14 01:19:10.257686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.264581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.264641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:36.910 [2024-12-14 01:19:10.264654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.851 ms
00:19:36.910 [2024-12-14 01:19:10.264668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.267426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.267640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:36.910 [2024-12-14 01:19:10.267660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms
00:19:36.910 [2024-12-14 01:19:10.267667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.272249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.272304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:36.910 [2024-12-14 01:19:10.272315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms
00:19:36.910 [2024-12-14 01:19:10.272333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.272473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.272486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:36.910 [2024-12-14 01:19:10.272498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms
00:19:36.910 [2024-12-14 01:19:10.272511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.275805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.275851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:36.910 [2024-12-14 01:19:10.275863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms
00:19:36.910 [2024-12-14 01:19:10.275870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.278639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.278682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:36.910 [2024-12-14 01:19:10.278692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms
00:19:36.910 [2024-12-14 01:19:10.278700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.280991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.281039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:36.910 [2024-12-14 01:19:10.281049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms
00:19:36.910 [2024-12-14 01:19:10.281056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.283271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.910 [2024-12-14 01:19:10.283323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:36.910 [2024-12-14 01:19:10.283332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms
00:19:36.910 [2024-12-14 01:19:10.283340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.910 [2024-12-14 01:19:10.283383] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:36.910 [2024-12-14 01:19:10.283399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:19:36.910 [2024-12-14 01:19:10.283795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.283994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:36.911 [2024-12-14 01:19:10.284236] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:36.911 [2024-12-14 01:19:10.284250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4
00:19:36.911 [2024-12-14 01:19:10.284258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:36.911 [2024-12-14 01:19:10.284266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:36.911 [2024-12-14 01:19:10.284273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:36.911 [2024-12-14 01:19:10.284282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:36.911 [2024-12-14 01:19:10.284290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:36.911 [2024-12-14 01:19:10.284298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:36.911 [2024-12-14 01:19:10.284311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:36.911 [2024-12-14 01:19:10.284317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:36.911 [2024-12-14 01:19:10.284323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:36.911 [2024-12-14 01:19:10.284332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.911 [2024-12-14 01:19:10.284340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:36.911 [2024-12-14 01:19:10.284349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms
00:19:36.911 [2024-12-14 01:19:10.284357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.911 [2024-12-14 01:19:10.286705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.911 [2024-12-14 01:19:10.286736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:36.911 [2024-12-14 01:19:10.286746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms
00:19:36.911 [2024-12-14 01:19:10.286760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.911 [2024-12-14 01:19:10.286886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.911 [2024-12-14 01:19:10.286898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:36.911 [2024-12-14 01:19:10.286908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:19:36.911 [2024-12-14 01:19:10.286921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.911 [2024-12-14 01:19:10.294780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:36.911 [2024-12-14 01:19:10.294829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:36.911 [2024-12-14 01:19:10.294840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:36.911 [2024-12-14 01:19:10.294855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.911 [2024-12-14 01:19:10.294925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:36.911 [2024-12-14 01:19:10.294935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:36.911 [2024-12-14 01:19:10.294943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:36.911 [2024-12-14 01:19:10.294951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.911 [2024-12-14 01:19:10.295008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:36.911 [2024-12-14 01:19:10.295020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:36.911 [2024-12-14 01:19:10.295028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:19:36.911 [2024-12-14 01:19:10.295036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.911 [2024-12-14 01:19:10.295056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.911 [2024-12-14 01:19:10.295065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.911 [2024-12-14 01:19:10.295073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.911 [2024-12-14 01:19:10.295084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.911 [2024-12-14 01:19:10.308242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.911 [2024-12-14 01:19:10.308291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.911 [2024-12-14 01:19:10.308301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.911 [2024-12-14 01:19:10.308313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.911 [2024-12-14 01:19:10.318270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.911 [2024-12-14 01:19:10.318322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.911 [2024-12-14 01:19:10.318333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.911 [2024-12-14 01:19:10.318341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.911 [2024-12-14 01:19:10.318390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.912 [2024-12-14 01:19:10.318409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318457] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.912 [2024-12-14 01:19:10.318480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.912 [2024-12-14 01:19:10.318591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.912 [2024-12-14 01:19:10.318680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.912 [2024-12-14 01:19:10.318752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.912 [2024-12-14 01:19:10.318820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:19:36.912 [2024-12-14 01:19:10.318831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.912 [2024-12-14 01:19:10.318839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.912 [2024-12-14 01:19:10.318982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.240 ms, result 0 00:19:36.912 00:19:36.912 00:19:37.173 01:19:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:37.173 01:19:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:37.745 01:19:11 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.745 [2024-12-14 01:19:11.143768] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:37.745 [2024-12-14 01:19:11.143873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89427 ] 00:19:37.745 [2024-12-14 01:19:11.286496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.745 [2024-12-14 01:19:11.307776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.006 [2024-12-14 01:19:11.414160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.006 [2024-12-14 01:19:11.414489] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.006 [2024-12-14 01:19:11.574943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.575003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.006 [2024-12-14 01:19:11.575023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.006 [2024-12-14 01:19:11.575032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.577615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.577689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.006 [2024-12-14 01:19:11.577701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:19:38.006 [2024-12-14 01:19:11.577710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.577828] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.006 [2024-12-14 01:19:11.578093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.006 [2024-12-14 
01:19:11.578113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.578122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.006 [2024-12-14 01:19:11.578135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:38.006 [2024-12-14 01:19:11.578143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.580029] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.006 [2024-12-14 01:19:11.583930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.583990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.006 [2024-12-14 01:19:11.584005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:19:38.006 [2024-12-14 01:19:11.584014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.584118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.584130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.006 [2024-12-14 01:19:11.584139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:38.006 [2024-12-14 01:19:11.584146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.592377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.592603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.006 [2024-12-14 01:19:11.592647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.182 ms 00:19:38.006 [2024-12-14 01:19:11.592656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.592810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.592823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.006 [2024-12-14 01:19:11.592838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:38.006 [2024-12-14 01:19:11.592854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.592882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.592891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.006 [2024-12-14 01:19:11.592900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.006 [2024-12-14 01:19:11.592908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.592930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.006 [2024-12-14 01:19:11.594982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.595018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.006 [2024-12-14 01:19:11.595029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:19:38.006 [2024-12-14 01:19:11.595042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.006 [2024-12-14 01:19:11.595089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.006 [2024-12-14 01:19:11.595102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.006 [2024-12-14 01:19:11.595110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:38.007 [2024-12-14 01:19:11.595118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.007 [2024-12-14 01:19:11.595137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:19:38.007 [2024-12-14 01:19:11.595158] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.007 [2024-12-14 01:19:11.595200] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.007 [2024-12-14 01:19:11.595227] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.007 [2024-12-14 01:19:11.595333] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.007 [2024-12-14 01:19:11.595345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.007 [2024-12-14 01:19:11.595356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.007 [2024-12-14 01:19:11.595367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595379] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595387] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.007 [2024-12-14 01:19:11.595395] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.007 [2024-12-14 01:19:11.595402] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.007 [2024-12-14 01:19:11.595410] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.007 [2024-12-14 01:19:11.595426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.007 [2024-12-14 01:19:11.595434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.007 [2024-12-14 01:19:11.595442] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:19:38.007 [2024-12-14 01:19:11.595449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.007 [2024-12-14 01:19:11.595539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.007 [2024-12-14 01:19:11.595549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.007 [2024-12-14 01:19:11.595559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:38.007 [2024-12-14 01:19:11.595566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.007 [2024-12-14 01:19:11.595704] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.007 [2024-12-14 01:19:11.595722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.007 [2024-12-14 01:19:11.595733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.007 [2024-12-14 01:19:11.595761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.007 [2024-12-14 01:19:11.595792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.007 [2024-12-14 01:19:11.595809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.007 [2024-12-14 01:19:11.595816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.007 [2024-12-14 01:19:11.595826] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.007 [2024-12-14 01:19:11.595835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.007 [2024-12-14 01:19:11.595843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:38.007 [2024-12-14 01:19:11.595858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.007 [2024-12-14 01:19:11.595877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.007 [2024-12-14 01:19:11.595900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.007 [2024-12-14 01:19:11.595932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.007 [2024-12-14 01:19:11.595956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.007 [2024-12-14 01:19:11.595978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:38.007 [2024-12-14 01:19:11.595986] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.007 [2024-12-14 01:19:11.595994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.007 [2024-12-14 01:19:11.596002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:38.007 [2024-12-14 01:19:11.596010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.007 [2024-12-14 01:19:11.596017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.007 [2024-12-14 01:19:11.596023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:38.007 [2024-12-14 01:19:11.596030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.007 [2024-12-14 01:19:11.596036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.007 [2024-12-14 01:19:11.596043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:38.007 [2024-12-14 01:19:11.596053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.596060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.007 [2024-12-14 01:19:11.596067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:38.007 [2024-12-14 01:19:11.596073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.596080] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.007 [2024-12-14 01:19:11.596088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.007 [2024-12-14 01:19:11.596095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.007 [2024-12-14 01:19:11.596102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.007 [2024-12-14 01:19:11.596110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:19:38.007 [2024-12-14 01:19:11.596119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.007 [2024-12-14 01:19:11.596126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.007 [2024-12-14 01:19:11.596135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.007 [2024-12-14 01:19:11.596142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.007 [2024-12-14 01:19:11.596149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.007 [2024-12-14 01:19:11.596157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.007 [2024-12-14 01:19:11.596166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.007 [2024-12-14 01:19:11.596185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:38.007 [2024-12-14 01:19:11.596193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.007 [2024-12-14 01:19:11.596200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:38.007 [2024-12-14 01:19:11.596207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:38.007 [2024-12-14 01:19:11.596214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:38.007 [2024-12-14 01:19:11.596222] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:38.007 [2024-12-14 01:19:11.596236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:38.007 [2024-12-14 01:19:11.596243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:38.007 [2024-12-14 01:19:11.596250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:38.007 [2024-12-14 01:19:11.596288] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.007 [2024-12-14 01:19:11.596298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.007 [2024-12-14 01:19:11.596317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:19:38.007 [2024-12-14 01:19:11.596324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.007 [2024-12-14 01:19:11.596331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.007 [2024-12-14 01:19:11.596338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.007 [2024-12-14 01:19:11.596346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.007 [2024-12-14 01:19:11.596355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:19:38.008 [2024-12-14 01:19:11.596363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 01:19:11.609507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 01:19:11.609553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.008 [2024-12-14 01:19:11.609565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.087 ms 00:19:38.008 [2024-12-14 01:19:11.609574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 01:19:11.609740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 01:19:11.609760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.008 [2024-12-14 01:19:11.609771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:38.008 [2024-12-14 01:19:11.609779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.627920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.627970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.269 [2024-12-14 
01:19:11.627983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.117 ms 00:19:38.269 [2024-12-14 01:19:11.627991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.628085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.628097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.269 [2024-12-14 01:19:11.628106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.269 [2024-12-14 01:19:11.628114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.628570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.628602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.269 [2024-12-14 01:19:11.628614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:38.269 [2024-12-14 01:19:11.628643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.628799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.628813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.269 [2024-12-14 01:19:11.628823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:38.269 [2024-12-14 01:19:11.628835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.636504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.636549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.269 [2024-12-14 01:19:11.636561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.645 ms 00:19:38.269 [2024-12-14 01:19:11.636576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:38.269 [2024-12-14 01:19:11.640291] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.269 [2024-12-14 01:19:11.640497] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.269 [2024-12-14 01:19:11.640517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-12-14 01:19:11.640527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.269 [2024-12-14 01:19:11.640537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.800 ms 00:19:38.269 [2024-12-14 01:19:11.640547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-12-14 01:19:11.656298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.656363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.270 [2024-12-14 01:19:11.656376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:19:38.270 [2024-12-14 01:19:11.656385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.659441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.659614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.270 [2024-12-14 01:19:11.659647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:19:38.270 [2024-12-14 01:19:11.659656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.662304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.662360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.270 [2024-12-14 01:19:11.662371] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:19:38.270 [2024-12-14 01:19:11.662379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.662883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.662929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.270 [2024-12-14 01:19:11.662954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:19:38.270 [2024-12-14 01:19:11.663030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.686546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.686774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.270 [2024-12-14 01:19:11.686797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.468 ms 00:19:38.270 [2024-12-14 01:19:11.686806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.694849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.270 [2024-12-14 01:19:11.712917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.712976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.270 [2024-12-14 01:19:11.712989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.028 ms 00:19:38.270 [2024-12-14 01:19:11.712999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.713083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.713094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.270 [2024-12-14 01:19:11.713108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:38.270 
[2024-12-14 01:19:11.713117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.713174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.713184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.270 [2024-12-14 01:19:11.713193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:38.270 [2024-12-14 01:19:11.713203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.713233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.713242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.270 [2024-12-14 01:19:11.713251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.270 [2024-12-14 01:19:11.713262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.713301] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.270 [2024-12-14 01:19:11.713313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.713321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.270 [2024-12-14 01:19:11.713334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:38.270 [2024-12-14 01:19:11.713342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.719079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.719125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.270 [2024-12-14 01:19:11.719145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.715 ms 00:19:38.270 [2024-12-14 01:19:11.719154] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.719249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.270 [2024-12-14 01:19:11.719260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.270 [2024-12-14 01:19:11.719269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:38.270 [2024-12-14 01:19:11.719277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.270 [2024-12-14 01:19:11.720792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.270 [2024-12-14 01:19:11.722260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.501 ms, result 0 00:19:38.270 [2024-12-14 01:19:11.723181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.270 [2024-12-14 01:19:11.730942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.531  [2024-12-14T01:19:12.143Z] Copying: 4096/4096 [kB] (average 10240 kBps)[2024-12-14 01:19:12.132399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.531 [2024-12-14 01:19:12.133556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.531 [2024-12-14 01:19:12.133607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:38.531 [2024-12-14 01:19:12.133646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.531 [2024-12-14 01:19:12.133656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.531 [2024-12-14 01:19:12.133694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:38.532 [2024-12-14 01:19:12.134382] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.532 [2024-12-14 01:19:12.134424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:38.532 [2024-12-14 01:19:12.134437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:38.532 [2024-12-14 01:19:12.134446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.532 [2024-12-14 01:19:12.136646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.532 [2024-12-14 01:19:12.136691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.532 [2024-12-14 01:19:12.136706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:19:38.532 [2024-12-14 01:19:12.136714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.532 [2024-12-14 01:19:12.141233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.532 [2024-12-14 01:19:12.141271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:38.532 [2024-12-14 01:19:12.141284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:19:38.532 [2024-12-14 01:19:12.141294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.148195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.148236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.795 [2024-12-14 01:19:12.148247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.865 ms 00:19:38.795 [2024-12-14 01:19:12.148261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.150974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.151171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.795 
[2024-12-14 01:19:12.151189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:19:38.795 [2024-12-14 01:19:12.151197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.156435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.156489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.795 [2024-12-14 01:19:12.156501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.194 ms 00:19:38.795 [2024-12-14 01:19:12.156509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.156672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.156687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.795 [2024-12-14 01:19:12.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:38.795 [2024-12-14 01:19:12.156714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.159833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.159883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.795 [2024-12-14 01:19:12.159894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:19:38.795 [2024-12-14 01:19:12.159902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.162750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.162797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.795 [2024-12-14 01:19:12.162807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:19:38.795 [2024-12-14 01:19:12.162814] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.165220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.165269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.795 [2024-12-14 01:19:12.165279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:19:38.795 [2024-12-14 01:19:12.165286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.167668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.795 [2024-12-14 01:19:12.167845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.795 [2024-12-14 01:19:12.167916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:19:38.795 [2024-12-14 01:19:12.167939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.795 [2024-12-14 01:19:12.167990] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.795 [2024-12-14 01:19:12.168022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.795 [2024-12-14 01:19:12.168176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168368] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168497] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168606] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 
01:19:12.168756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 [2024-12-14 01:19:12.168861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.796 
[2024-12-14 01:19:12.168869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.797 [2024-12-14 01:19:12.168960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.797 [2024-12-14 01:19:12.168968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:38.797 [2024-12-14 01:19:12.168977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:38.797 [2024-12-14 01:19:12.168987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:38.797 [2024-12-14 01:19:12.168994] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] user writes: 0 00:19:38.797 [2024-12-14 01:19:12.169003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:38.797 [2024-12-14 01:19:12.169011] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.797 [2024-12-14 01:19:12.169024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.797 [2024-12-14 01:19:12.169042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.797 [2024-12-14 01:19:12.169048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.797 [2024-12-14 01:19:12.169057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:38.797 [2024-12-14 01:19:12.169065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.797 [2024-12-14 01:19:12.169074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.797 [2024-12-14 01:19:12.169086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:19:38.797 [2024-12-14 01:19:12.169095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.171270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.797 [2024-12-14 01:19:12.171305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.797 [2024-12-14 01:19:12.171316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:19:38.797 [2024-12-14 01:19:12.171332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.171448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.797 [2024-12-14 01:19:12.171459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.797 [2024-12-14 01:19:12.171470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:38.797 [2024-12-14 01:19:12.171478] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.179996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.180054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.797 [2024-12-14 01:19:12.180064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.180076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.180158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.180167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.797 [2024-12-14 01:19:12.180176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.180189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.180243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.180256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.797 [2024-12-14 01:19:12.180264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.180272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.180292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.180301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.797 [2024-12-14 01:19:12.180309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.180321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.193562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 
01:19:12.193613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.797 [2024-12-14 01:19:12.193648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.193663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.203658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.203702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.797 [2024-12-14 01:19:12.203713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.203721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.203766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.203777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.797 [2024-12-14 01:19:12.203785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.203794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.203834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.203843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.797 [2024-12-14 01:19:12.203851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.203859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.203933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.203945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.797 [2024-12-14 01:19:12.203961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.203969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.204005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.204018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.797 [2024-12-14 01:19:12.204026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.204034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.204075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.204090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.797 [2024-12-14 01:19:12.204098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.204106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.204159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.797 [2024-12-14 01:19:12.204181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.797 [2024-12-14 01:19:12.204190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.797 [2024-12-14 01:19:12.204198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.797 [2024-12-14 01:19:12.204343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.760 ms, result 0 00:19:38.797 00:19:38.797 00:19:39.059 01:19:12 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89448 00:19:39.059 01:19:12 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89448 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89448 ']' 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.059 01:19:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:39.059 01:19:12 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:39.059 [2024-12-14 01:19:12.498229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:39.059 [2024-12-14 01:19:12.498596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89448 ] 00:19:39.059 [2024-12-14 01:19:12.640084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.059 [2024-12-14 01:19:12.668674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.009 01:19:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.009 01:19:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:40.009 01:19:13 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:40.009 [2024-12-14 01:19:13.569677] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.009 [2024-12-14 01:19:13.569755] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.271 [2024-12-14 01:19:13.747389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.271 [2024-12-14 
01:19:13.747670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.271 [2024-12-14 01:19:13.747697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.271 [2024-12-14 01:19:13.747714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.271 [2024-12-14 01:19:13.750307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.271 [2024-12-14 01:19:13.750365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.271 [2024-12-14 01:19:13.750378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:19:40.271 [2024-12-14 01:19:13.750389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.271 [2024-12-14 01:19:13.750519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.271 [2024-12-14 01:19:13.750835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.271 [2024-12-14 01:19:13.750854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.271 [2024-12-14 01:19:13.750864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.271 [2024-12-14 01:19:13.750878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:40.271 [2024-12-14 01:19:13.750891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.271 [2024-12-14 01:19:13.752739] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.271 [2024-12-14 01:19:13.756703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.271 [2024-12-14 01:19:13.756755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.271 [2024-12-14 01:19:13.756769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:19:40.271 
[2024-12-14 01:19:13.756777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.271 [2024-12-14 01:19:13.756862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.756874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.272 [2024-12-14 01:19:13.756888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:40.272 [2024-12-14 01:19:13.756896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.765021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.765065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.272 [2024-12-14 01:19:13.765082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.069 ms 00:19:40.272 [2024-12-14 01:19:13.765096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.765233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.765245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.272 [2024-12-14 01:19:13.765258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:40.272 [2024-12-14 01:19:13.765271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.765297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.765306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.272 [2024-12-14 01:19:13.765319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.272 [2024-12-14 01:19:13.765326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.765351] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:40.272 [2024-12-14 01:19:13.767660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.767821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.272 [2024-12-14 01:19:13.767896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:19:40.272 [2024-12-14 01:19:13.767926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.767988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.768015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.272 [2024-12-14 01:19:13.768035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:40.272 [2024-12-14 01:19:13.768057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.768090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.272 [2024-12-14 01:19:13.768133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:40.272 [2024-12-14 01:19:13.768254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.272 [2024-12-14 01:19:13.768302] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:40.272 [2024-12-14 01:19:13.768439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:40.272 [2024-12-14 01:19:13.768480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.272 [2024-12-14 01:19:13.768514] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x190 bytes 00:19:40.272 [2024-12-14 01:19:13.768598] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.272 [2024-12-14 01:19:13.768614] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.272 [2024-12-14 01:19:13.768645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:40.272 [2024-12-14 01:19:13.768653] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.272 [2024-12-14 01:19:13.768665] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:40.272 [2024-12-14 01:19:13.768677] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:40.272 [2024-12-14 01:19:13.768689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.768698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.272 [2024-12-14 01:19:13.768709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:19:40.272 [2024-12-14 01:19:13.768721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.768821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.272 [2024-12-14 01:19:13.768832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.272 [2024-12-14 01:19:13.768849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:40.272 [2024-12-14 01:19:13.768857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.272 [2024-12-14 01:19:13.768966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.272 [2024-12-14 01:19:13.768980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.272 [2024-12-14 01:19:13.768991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
0.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.272 [2024-12-14 01:19:13.769027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.272 [2024-12-14 01:19:13.769056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.272 [2024-12-14 01:19:13.769073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.272 [2024-12-14 01:19:13.769082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:40.272 [2024-12-14 01:19:13.769090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.272 [2024-12-14 01:19:13.769098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.272 [2024-12-14 01:19:13.769107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:40.272 [2024-12-14 01:19:13.769114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.272 [2024-12-14 01:19:13.769131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.272 [2024-12-14 01:19:13.769157] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 91.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.272 [2024-12-14 01:19:13.769184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.272 [2024-12-14 01:19:13.769214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.272 [2024-12-14 01:19:13.769246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.272 [2024-12-14 01:19:13.769274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.272 [2024-12-14 01:19:13.769293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.272 [2024-12-14 01:19:13.769301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:40.272 [2024-12-14 01:19:13.769312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.272 [2024-12-14 01:19:13.769320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:40.272 [2024-12-14 01:19:13.769331] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:40.272 [2024-12-14 01:19:13.769339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:40.272 [2024-12-14 01:19:13.769356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:40.272 [2024-12-14 01:19:13.769367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.272 [2024-12-14 01:19:13.769386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.272 [2024-12-14 01:19:13.769413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.272 [2024-12-14 01:19:13.769434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.272 [2024-12-14 01:19:13.769444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.272 [2024-12-14 01:19:13.769451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.272 [2024-12-14 01:19:13.769461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.272 [2024-12-14 01:19:13.769471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.272 [2024-12-14 01:19:13.769484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.272 [2024-12-14 01:19:13.769495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.272 [2024-12-14 01:19:13.769508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.272 [2024-12-14 
01:19:13.769522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:40.272 [2024-12-14 01:19:13.769532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:40.272 [2024-12-14 01:19:13.769541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:40.272 [2024-12-14 01:19:13.769554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:40.272 [2024-12-14 01:19:13.769563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:40.273 [2024-12-14 01:19:13.769572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:40.273 [2024-12-14 01:19:13.769584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:40.273 [2024-12-14 01:19:13.769594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:40.273 [2024-12-14 01:19:13.769601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:40.273 [2024-12-14 01:19:13.769611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:40.273 [2024-12-14 01:19:13.769680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.273 [2024-12-14 01:19:13.769693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.273 [2024-12-14 01:19:13.769712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.273 [2024-12-14 01:19:13.769719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.273 [2024-12-14 01:19:13.769730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.273 [2024-12-14 01:19:13.769738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.769750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.273 [2024-12-14 01:19:13.769758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:19:40.273 [2024-12-14 01:19:13.769768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.783987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.784161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.273 [2024-12-14 01:19:13.784233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.132 ms 00:19:40.273 [2024-12-14 01:19:13.784259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.784405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.784444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.273 [2024-12-14 01:19:13.784467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:40.273 [2024-12-14 01:19:13.784490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.796463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.796666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.273 [2024-12-14 01:19:13.796837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.884 ms 00:19:40.273 [2024-12-14 01:19:13.796889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.796977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.797004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.273 [2024-12-14 01:19:13.797090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.273 [2024-12-14 01:19:13.797117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.797644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.797790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.273 [2024-12-14 01:19:13.797813] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:19:40.273 [2024-12-14 01:19:13.797897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.798067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.798096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.273 [2024-12-14 01:19:13.798118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:40.273 [2024-12-14 01:19:13.798139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.806115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.806278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.273 [2024-12-14 01:19:13.806331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.930 ms 00:19:40.273 [2024-12-14 01:19:13.806355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.819220] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:40.273 [2024-12-14 01:19:13.819426] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.273 [2024-12-14 01:19:13.819502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.819528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.273 [2024-12-14 01:19:13.819550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.027 ms 00:19:40.273 [2024-12-14 01:19:13.819573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.835359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.835549] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.273 [2024-12-14 01:19:13.835609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.707 ms 00:19:40.273 [2024-12-14 01:19:13.835659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.838584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.838766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.273 [2024-12-14 01:19:13.838824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:19:40.273 [2024-12-14 01:19:13.838849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.841758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.841961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.273 [2024-12-14 01:19:13.841981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:19:40.273 [2024-12-14 01:19:13.841992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.842336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.842360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.273 [2024-12-14 01:19:13.842372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:40.273 [2024-12-14 01:19:13.842383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.865934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.273 [2024-12-14 01:19:13.866006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.273 [2024-12-14 01:19:13.866019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 23.526 ms 00:19:40.273 [2024-12-14 01:19:13.866033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.273 [2024-12-14 01:19:13.873973] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.535 [2024-12-14 01:19:13.891756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.891986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.535 [2024-12-14 01:19:13.892012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.621 ms 00:19:40.535 [2024-12-14 01:19:13.892021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.892109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.892128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.535 [2024-12-14 01:19:13.892139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:40.535 [2024-12-14 01:19:13.892148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.892206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.892219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.535 [2024-12-14 01:19:13.892230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:40.535 [2024-12-14 01:19:13.892238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.892271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.892280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.535 [2024-12-14 01:19:13.892298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.535 [2024-12-14 
01:19:13.892306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.892344] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.535 [2024-12-14 01:19:13.892356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.892366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.535 [2024-12-14 01:19:13.892374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:40.535 [2024-12-14 01:19:13.892384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.898222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.898274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.535 [2024-12-14 01:19:13.898285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.815 ms 00:19:40.535 [2024-12-14 01:19:13.898299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.898386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:13.898398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.535 [2024-12-14 01:19:13.898407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:40.535 [2024-12-14 01:19:13.898417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:13.899460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.535 [2024-12-14 01:19:13.900845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.751 ms, result 0 00:19:40.535 [2024-12-14 01:19:13.902569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.535 Some configs were skipped because the RPC state that can call them passed over. 00:19:40.535 01:19:13 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:40.535 [2024-12-14 01:19:14.128711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.535 [2024-12-14 01:19:14.128892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.535 [2024-12-14 01:19:14.128963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:19:40.535 [2024-12-14 01:19:14.128989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.535 [2024-12-14 01:19:14.129048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.579 ms, result 0 00:19:40.535 true 00:19:40.796 01:19:14 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:40.796 [2024-12-14 01:19:14.356520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.796 [2024-12-14 01:19:14.356733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.796 [2024-12-14 01:19:14.357237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:19:40.796 [2024-12-14 01:19:14.357272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.796 [2024-12-14 01:19:14.357347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.629 ms, result 0 00:19:40.796 true 00:19:40.796 01:19:14 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89448 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89448 ']' 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89448 00:19:40.796 01:19:14 
ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89448 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89448' 00:19:40.796 killing process with pid 89448 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89448 00:19:40.796 01:19:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89448 00:19:41.059 [2024-12-14 01:19:14.527010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.527171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.059 [2024-12-14 01:19:14.527239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.059 [2024-12-14 01:19:14.527264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.527309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:41.059 [2024-12-14 01:19:14.527810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.527870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.059 [2024-12-14 01:19:14.527895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:19:41.059 [2024-12-14 01:19:14.527915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.528347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.528399] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.059 [2024-12-14 01:19:14.528421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:41.059 [2024-12-14 01:19:14.528447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.532983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.533107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.059 [2024-12-14 01:19:14.533160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:19:41.059 [2024-12-14 01:19:14.533188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.540109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.540174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.059 [2024-12-14 01:19:14.540200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.870 ms 00:19:41.059 [2024-12-14 01:19:14.540223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.542915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.543036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.059 [2024-12-14 01:19:14.543086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:19:41.059 [2024-12-14 01:19:14.543110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.547070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.547195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.059 [2024-12-14 01:19:14.547245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.869 ms 00:19:41.059 
[2024-12-14 01:19:14.547273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.547436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.547479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.059 [2024-12-14 01:19:14.547502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:41.059 [2024-12-14 01:19:14.547562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.550512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.550646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:41.059 [2024-12-14 01:19:14.550698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:19:41.059 [2024-12-14 01:19:14.550725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.552974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.553088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:41.059 [2024-12-14 01:19:14.553102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:19:41.059 [2024-12-14 01:19:14.553111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.554688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.059 [2024-12-14 01:19:14.554727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.059 [2024-12-14 01:19:14.554736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:19:41.059 [2024-12-14 01:19:14.554745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.556679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:41.059 [2024-12-14 01:19:14.556717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.059 [2024-12-14 01:19:14.556725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:19:41.059 [2024-12-14 01:19:14.556734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.059 [2024-12-14 01:19:14.556770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.059 [2024-12-14 01:19:14.556787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.556993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557240] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557359] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.059 [2024-12-14 01:19:14.557421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557495] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 
01:19:14.557618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.060 [2024-12-14 01:19:14.557691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.060 [2024-12-14 01:19:14.557698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:41.060 [2024-12-14 01:19:14.557708] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.060 [2024-12-14 01:19:14.557719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.060 [2024-12-14 01:19:14.557761] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.060 [2024-12-14 01:19:14.557769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.060 [2024-12-14 01:19:14.557778] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.060 [2024-12-14 01:19:14.557802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.060 [2024-12-14 01:19:14.557811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.060 [2024-12-14 01:19:14.557818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.060 
[2024-12-14 01:19:14.557826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.060 [2024-12-14 01:19:14.557833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-14 01:19:14.557842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.060 [2024-12-14 01:19:14.557851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:19:41.060 [2024-12-14 01:19:14.557861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.559489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-14 01:19:14.559517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.060 [2024-12-14 01:19:14.559527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:19:41.060 [2024-12-14 01:19:14.559537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.559639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-14 01:19:14.559651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.060 [2024-12-14 01:19:14.559663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:41.060 [2024-12-14 01:19:14.559673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.565597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.565659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.060 [2024-12-14 01:19:14.565671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.565680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.565759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:19:41.060 [2024-12-14 01:19:14.565775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.060 [2024-12-14 01:19:14.565783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.565794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.565837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.565849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.060 [2024-12-14 01:19:14.565857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.565866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.565885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.565895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.060 [2024-12-14 01:19:14.565902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.565913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.576368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.576529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.060 [2024-12-14 01:19:14.576545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.576561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.584481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.584604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.060 [2024-12-14 01:19:14.584676] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.584711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.584770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.584798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.060 [2024-12-14 01:19:14.584822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.584844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.584885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.584908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.060 [2024-12-14 01:19:14.584975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.585000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.585089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.585119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.060 [2024-12-14 01:19:14.585143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.585163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.585210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.585235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.060 [2024-12-14 01:19:14.585255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.585319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:41.060 [2024-12-14 01:19:14.585375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.585410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.060 [2024-12-14 01:19:14.585432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.585453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.585511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.060 [2024-12-14 01:19:14.585538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.060 [2024-12-14 01:19:14.585559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.060 [2024-12-14 01:19:14.585648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-14 01:19:14.585821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.781 ms, result 0 00:19:41.321 01:19:14 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.321 [2024-12-14 01:19:14.832486] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:41.321 [2024-12-14 01:19:14.832796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89484 ] 00:19:41.582 [2024-12-14 01:19:14.991462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.582 [2024-12-14 01:19:15.019667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.582 [2024-12-14 01:19:15.131129] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.582 [2024-12-14 01:19:15.131425] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.843 [2024-12-14 01:19:15.291150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.291360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.843 [2024-12-14 01:19:15.291445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.843 [2024-12-14 01:19:15.291470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.294083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.294262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.843 [2024-12-14 01:19:15.294323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:19:41.843 [2024-12-14 01:19:15.294346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.294568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.843 [2024-12-14 01:19:15.294948] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.843 [2024-12-14 
01:19:15.294981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.294994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.843 [2024-12-14 01:19:15.295008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:19:41.843 [2024-12-14 01:19:15.295019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.296737] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:41.843 [2024-12-14 01:19:15.300659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.300708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:41.843 [2024-12-14 01:19:15.300724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.924 ms 00:19:41.843 [2024-12-14 01:19:15.300733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.300819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.300835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:41.843 [2024-12-14 01:19:15.300848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:41.843 [2024-12-14 01:19:15.300855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.309132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.309170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.843 [2024-12-14 01:19:15.309182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.230 ms 00:19:41.843 [2024-12-14 01:19:15.309190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.309335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.309348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.843 [2024-12-14 01:19:15.309360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:41.843 [2024-12-14 01:19:15.309370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.309437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.309454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.843 [2024-12-14 01:19:15.309462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:41.843 [2024-12-14 01:19:15.309470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.309496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:41.843 [2024-12-14 01:19:15.311427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.311461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.843 [2024-12-14 01:19:15.311476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:19:41.843 [2024-12-14 01:19:15.311485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.311529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.843 [2024-12-14 01:19:15.311546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.843 [2024-12-14 01:19:15.311555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:41.843 [2024-12-14 01:19:15.311563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.843 [2024-12-14 01:19:15.311581] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:19:41.843 [2024-12-14 01:19:15.311603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:41.843 [2024-12-14 01:19:15.311667] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:41.843 [2024-12-14 01:19:15.311691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:41.843 [2024-12-14 01:19:15.311795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:41.843 [2024-12-14 01:19:15.311808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.843 [2024-12-14 01:19:15.311819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:41.843 [2024-12-14 01:19:15.311831] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.843 [2024-12-14 01:19:15.311842] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.843 [2024-12-14 01:19:15.311851] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:41.843 [2024-12-14 01:19:15.311861] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:41.843 [2024-12-14 01:19:15.311869] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:41.843 [2024-12-14 01:19:15.311879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:41.843 [2024-12-14 01:19:15.311890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.311903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.844 [2024-12-14 01:19:15.311912] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:41.844 [2024-12-14 01:19:15.311919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.312008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.312018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.844 [2024-12-14 01:19:15.312026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:41.844 [2024-12-14 01:19:15.312034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.312136] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.844 [2024-12-14 01:19:15.312149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.844 [2024-12-14 01:19:15.312161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:41.844 [2024-12-14 01:19:15.312194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.844 [2024-12-14 01:19:15.312227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.844 [2024-12-14 01:19:15.312245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.844 [2024-12-14 01:19:15.312255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:41.844 [2024-12-14 01:19:15.312263] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.844 [2024-12-14 01:19:15.312272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.844 [2024-12-14 01:19:15.312282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:41.844 [2024-12-14 01:19:15.312290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.844 [2024-12-14 01:19:15.312307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.844 [2024-12-14 01:19:15.312335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.844 [2024-12-14 01:19:15.312366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.844 [2024-12-14 01:19:15.312396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:41.844 [2024-12-14 01:19:15.312421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312431] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.844 [2024-12-14 01:19:15.312448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.844 [2024-12-14 01:19:15.312463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.844 [2024-12-14 01:19:15.312470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:41.844 [2024-12-14 01:19:15.312477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.844 [2024-12-14 01:19:15.312483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:41.844 [2024-12-14 01:19:15.312492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:41.844 [2024-12-14 01:19:15.312501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:41.844 [2024-12-14 01:19:15.312516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:41.844 [2024-12-14 01:19:15.312524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312530] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.844 [2024-12-14 01:19:15.312538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.844 [2024-12-14 01:19:15.312547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.844 [2024-12-14 01:19:15.312565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:19:41.844 [2024-12-14 01:19:15.312572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.844 [2024-12-14 01:19:15.312581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.844 [2024-12-14 01:19:15.312587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.844 [2024-12-14 01:19:15.312593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.844 [2024-12-14 01:19:15.312601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:41.844 [2024-12-14 01:19:15.312611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.844 [2024-12-14 01:19:15.312639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:41.844 [2024-12-14 01:19:15.312661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:41.844 [2024-12-14 01:19:15.312670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:41.844 [2024-12-14 01:19:15.312679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:41.844 [2024-12-14 01:19:15.312687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:41.844 [2024-12-14 01:19:15.312694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:41.844 [2024-12-14 01:19:15.312702] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:41.844 [2024-12-14 01:19:15.312716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:41.844 [2024-12-14 01:19:15.312724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:41.844 [2024-12-14 01:19:15.312732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:41.844 [2024-12-14 01:19:15.312769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.844 [2024-12-14 01:19:15.312779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.844 [2024-12-14 01:19:15.312802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:19:41.844 [2024-12-14 01:19:15.312810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.844 [2024-12-14 01:19:15.312818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:41.844 [2024-12-14 01:19:15.312826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.312833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.844 [2024-12-14 01:19:15.312842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:19:41.844 [2024-12-14 01:19:15.312850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.326790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.326827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.844 [2024-12-14 01:19:15.326839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.889 ms 00:19:41.844 [2024-12-14 01:19:15.326854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.326989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.327001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.844 [2024-12-14 01:19:15.327015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:41.844 [2024-12-14 01:19:15.327023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.355805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.355855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.844 [2024-12-14 
01:19:15.355873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.757 ms 00:19:41.844 [2024-12-14 01:19:15.355885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.355982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.355995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.844 [2024-12-14 01:19:15.356004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:41.844 [2024-12-14 01:19:15.356013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.844 [2024-12-14 01:19:15.356530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.844 [2024-12-14 01:19:15.356568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.845 [2024-12-14 01:19:15.356580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:19:41.845 [2024-12-14 01:19:15.356598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.356782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.356797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.845 [2024-12-14 01:19:15.356809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:41.845 [2024-12-14 01:19:15.356819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.365252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.365292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.845 [2024-12-14 01:19:15.365311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.408 ms 00:19:41.845 [2024-12-14 01:19:15.365322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:41.845 [2024-12-14 01:19:15.369278] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:41.845 [2024-12-14 01:19:15.369328] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:41.845 [2024-12-14 01:19:15.369341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.369350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:41.845 [2024-12-14 01:19:15.369359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:19:41.845 [2024-12-14 01:19:15.369367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.385322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.385366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:41.845 [2024-12-14 01:19:15.385379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.856 ms 00:19:41.845 [2024-12-14 01:19:15.385389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.388393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.388435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:41.845 [2024-12-14 01:19:15.388445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:19:41.845 [2024-12-14 01:19:15.388454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.391061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.391111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:41.845 [2024-12-14 01:19:15.391122] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:19:41.845 [2024-12-14 01:19:15.391129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.391475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.391489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.845 [2024-12-14 01:19:15.391499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:41.845 [2024-12-14 01:19:15.391507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.414312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.414363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:41.845 [2024-12-14 01:19:15.414376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.774 ms 00:19:41.845 [2024-12-14 01:19:15.414393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.422494] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:41.845 [2024-12-14 01:19:15.440783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.440831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.845 [2024-12-14 01:19:15.440845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.295 ms 00:19:41.845 [2024-12-14 01:19:15.440855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.440943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.440958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:41.845 [2024-12-14 01:19:15.440968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:41.845 
[2024-12-14 01:19:15.440976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.441033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.441044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.845 [2024-12-14 01:19:15.441053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:41.845 [2024-12-14 01:19:15.441061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.441090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.441100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.845 [2024-12-14 01:19:15.441112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:41.845 [2024-12-14 01:19:15.441121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.441160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:41.845 [2024-12-14 01:19:15.441174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.441184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:41.845 [2024-12-14 01:19:15.441192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:41.845 [2024-12-14 01:19:15.441201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.447096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.447139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.845 [2024-12-14 01:19:15.447151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.873 ms 00:19:41.845 [2024-12-14 01:19:15.447167] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.447257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.845 [2024-12-14 01:19:15.447268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:41.845 [2024-12-14 01:19:15.447277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:41.845 [2024-12-14 01:19:15.447286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.845 [2024-12-14 01:19:15.448346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.845 [2024-12-14 01:19:15.449716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.883 ms, result 0 00:19:41.845 [2024-12-14 01:19:15.450905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.106 [2024-12-14 01:19:15.458425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.050  [2024-12-14T01:19:17.608Z] Copying: 14/256 [MB] (14 MBps) [2024-12-14T01:19:18.554Z] Copying: 25/256 [MB] (10 MBps) [2024-12-14T01:19:19.944Z] Copying: 35/256 [MB] (10 MBps) [2024-12-14T01:19:20.888Z] Copying: 46/256 [MB] (10 MBps) [2024-12-14T01:19:21.832Z] Copying: 57/256 [MB] (11 MBps) [2024-12-14T01:19:22.778Z] Copying: 68/256 [MB] (11 MBps) [2024-12-14T01:19:23.723Z] Copying: 82/256 [MB] (13 MBps) [2024-12-14T01:19:24.668Z] Copying: 93/256 [MB] (10 MBps) [2024-12-14T01:19:25.617Z] Copying: 103/256 [MB] (10 MBps) [2024-12-14T01:19:26.561Z] Copying: 115/256 [MB] (12 MBps) [2024-12-14T01:19:27.952Z] Copying: 132/256 [MB] (16 MBps) [2024-12-14T01:19:28.526Z] Copying: 156/256 [MB] (24 MBps) [2024-12-14T01:19:29.916Z] Copying: 174/256 [MB] (18 MBps) [2024-12-14T01:19:30.859Z] Copying: 196/256 [MB] (21 MBps) [2024-12-14T01:19:31.805Z] Copying: 
218/256 [MB] (22 MBps) [2024-12-14T01:19:32.442Z] Copying: 238/256 [MB] (19 MBps) [2024-12-14T01:19:32.728Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-14 01:19:32.597550] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.116 [2024-12-14 01:19:32.600726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.600802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:59.116 [2024-12-14 01:19:32.600830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:59.116 [2024-12-14 01:19:32.600847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.600893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:59.116 [2024-12-14 01:19:32.602013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.602083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:59.116 [2024-12-14 01:19:32.602106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:19:59.116 [2024-12-14 01:19:32.602124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.602717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.603074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:59.116 [2024-12-14 01:19:32.603101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:59.116 [2024-12-14 01:19:32.603118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.608343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.608368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:59.116 [2024-12-14 01:19:32.608379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.189 ms 00:19:59.116 [2024-12-14 01:19:32.608396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.615417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.615470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:59.116 [2024-12-14 01:19:32.615491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:19:59.116 [2024-12-14 01:19:32.615499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.619173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.619233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:59.116 [2024-12-14 01:19:32.619245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:19:59.116 [2024-12-14 01:19:32.619253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.624145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.624204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:59.116 [2024-12-14 01:19:32.624216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 00:19:59.116 [2024-12-14 01:19:32.624224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.624378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.624390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:59.116 [2024-12-14 01:19:32.624403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:59.116 [2024-12-14 01:19:32.624411] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.627642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.627694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:59.116 [2024-12-14 01:19:32.627704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:19:59.116 [2024-12-14 01:19:32.627711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.630817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.630869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:59.116 [2024-12-14 01:19:32.630879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:19:59.116 [2024-12-14 01:19:32.630887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.633444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.633493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:59.116 [2024-12-14 01:19:32.633503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:19:59.116 [2024-12-14 01:19:32.633511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.636060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.116 [2024-12-14 01:19:32.636113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.116 [2024-12-14 01:19:32.636124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:19:59.116 [2024-12-14 01:19:32.636131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.116 [2024-12-14 01:19:32.636177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.116 [2024-12-14 
01:19:32.636193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 
01:19:32.636306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.116 [2024-12-14 01:19:32.636322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 
[2024-12-14 01:19:32.636413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:19:59.117 [2024-12-14 01:19:32.636542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: 
free 00:19:59.117 [2024-12-14 01:19:32.636677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:19:59.117 [2024-12-14 01:19:32.636797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 
0 state: free 00:19:59.117 [2024-12-14 01:19:32.636910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.636992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.637000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.117 [2024-12-14 01:19:32.637007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.118 [2024-12-14 01:19:32.637015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:19:59.118 [2024-12-14 01:19:32.637023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.118 [2024-12-14 01:19:32.637030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.118 [2024-12-14 01:19:32.637047] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.118 [2024-12-14 01:19:32.637055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6143b464-b06c-4534-b843-f22084791cb4 00:19:59.118 [2024-12-14 01:19:32.637068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.118 [2024-12-14 01:19:32.637077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.118 [2024-12-14 01:19:32.637084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.118 [2024-12-14 01:19:32.637093] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.118 [2024-12-14 01:19:32.637101] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.118 [2024-12-14 01:19:32.637112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.118 [2024-12-14 01:19:32.637120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.118 [2024-12-14 01:19:32.637127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.118 [2024-12-14 01:19:32.637135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.118 [2024-12-14 01:19:32.637143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.118 [2024-12-14 01:19:32.637152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:59.118 [2024-12-14 01:19:32.637162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:19:59.118 [2024-12-14 01:19:32.637170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.639656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.118 [2024-12-14 01:19:32.639695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:59.118 [2024-12-14 01:19:32.639707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:19:59.118 [2024-12-14 01:19:32.639724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.639873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.118 [2024-12-14 01:19:32.639884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:59.118 [2024-12-14 01:19:32.639895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:59.118 [2024-12-14 01:19:32.639902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.648179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.648233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.118 [2024-12-14 01:19:32.648252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.648261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.648332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.648341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.118 [2024-12-14 01:19:32.648350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.648358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.648418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.648436] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.118 [2024-12-14 01:19:32.648445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.648456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.648475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.648484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.118 [2024-12-14 01:19:32.648493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.648506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.662814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.662872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.118 [2024-12-14 01:19:32.662892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.662901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.118 [2024-12-14 01:19:32.673114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.118 [2024-12-14 01:19:32.673189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.118 [2024-12-14 01:19:32.673252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.118 [2024-12-14 01:19:32.673367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:59.118 [2024-12-14 01:19:32.673448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.118 [2024-12-14 01:19:32.673517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673574] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.118 [2024-12-14 01:19:32.673584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.118 [2024-12-14 01:19:32.673592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.118 [2024-12-14 01:19:32.673600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.118 [2024-12-14 01:19:32.673767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.031 ms, result 0 00:19:59.379 00:19:59.379 00:19:59.379 01:19:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.951 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:59.951 Process with pid 89448 is not found 00:19:59.951 01:19:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89448 00:19:59.951 01:19:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89448 ']' 00:19:59.951 01:19:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89448 00:19:59.951 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89448) - No such process 00:19:59.951 01:19:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89448 is not found' 00:19:59.951 00:19:59.951 real 1m6.283s 00:19:59.951 user 1m25.261s 00:19:59.951 
sys 0m4.847s 00:19:59.951 01:19:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.951 ************************************ 00:19:59.951 END TEST ftl_trim 00:19:59.951 ************************************ 00:19:59.951 01:19:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 01:19:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.213 01:19:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:00.213 01:19:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.213 01:19:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 ************************************ 00:20:00.213 START TEST ftl_restore 00:20:00.213 ************************************ 00:20:00.213 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.213 * Looking for test storage... 
00:20:00.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.213 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:00.213 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:00.213 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:00.213 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:00.213 01:19:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.214 01:19:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.214 --rc genhtml_branch_coverage=1 00:20:00.214 --rc genhtml_function_coverage=1 00:20:00.214 --rc genhtml_legend=1 00:20:00.214 --rc geninfo_all_blocks=1 00:20:00.214 --rc geninfo_unexecuted_blocks=1 00:20:00.214 00:20:00.214 ' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.214 --rc genhtml_branch_coverage=1 00:20:00.214 --rc genhtml_function_coverage=1 00:20:00.214 --rc genhtml_legend=1 00:20:00.214 --rc geninfo_all_blocks=1 00:20:00.214 --rc geninfo_unexecuted_blocks=1 
00:20:00.214 00:20:00.214 ' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.214 --rc genhtml_branch_coverage=1 00:20:00.214 --rc genhtml_function_coverage=1 00:20:00.214 --rc genhtml_legend=1 00:20:00.214 --rc geninfo_all_blocks=1 00:20:00.214 --rc geninfo_unexecuted_blocks=1 00:20:00.214 00:20:00.214 ' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.214 --rc genhtml_branch_coverage=1 00:20:00.214 --rc genhtml_function_coverage=1 00:20:00.214 --rc genhtml_legend=1 00:20:00.214 --rc geninfo_all_blocks=1 00:20:00.214 --rc geninfo_unexecuted_blocks=1 00:20:00.214 00:20:00.214 ' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.KIiYytJquV 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89742 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89742 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 89742 ']' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.214 01:19:33 ftl.ftl_restore -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.214 01:19:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.214 01:19:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:00.476 [2024-12-14 01:19:33.878758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:00.476 [2024-12-14 01:19:33.878915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89742 ] 00:20:00.476 [2024-12-14 01:19:34.021145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.476 [2024-12-14 01:19:34.050307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.421 01:19:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.421 01:19:34 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:01.421 01:19:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller 
-b nvme0 -t PCIe -a 0000:00:11.0 00:20:01.421 01:19:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:01.421 01:19:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:01.421 01:19:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:01.421 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:01.690 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:01.690 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:01.690 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:01.690 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:01.690 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:01.690 { 00:20:01.690 "name": "nvme0n1", 00:20:01.690 "aliases": [ 00:20:01.690 "a227f51f-6318-4a18-b57f-77f4ad39bc89" 00:20:01.690 ], 00:20:01.690 "product_name": "NVMe disk", 00:20:01.690 "block_size": 4096, 00:20:01.690 "num_blocks": 1310720, 00:20:01.690 "uuid": "a227f51f-6318-4a18-b57f-77f4ad39bc89", 00:20:01.690 "numa_id": -1, 00:20:01.690 "assigned_rate_limits": { 00:20:01.690 "rw_ios_per_sec": 0, 00:20:01.690 "rw_mbytes_per_sec": 0, 00:20:01.690 "r_mbytes_per_sec": 0, 00:20:01.690 "w_mbytes_per_sec": 0 00:20:01.690 }, 00:20:01.690 "claimed": true, 00:20:01.690 "claim_type": "read_many_write_one", 00:20:01.690 "zoned": false, 00:20:01.690 "supported_io_types": { 00:20:01.690 "read": true, 00:20:01.690 "write": true, 00:20:01.690 "unmap": true, 00:20:01.690 "flush": true, 00:20:01.690 "reset": true, 00:20:01.690 "nvme_admin": true, 00:20:01.691 "nvme_io": true, 00:20:01.691 "nvme_io_md": false, 00:20:01.691 "write_zeroes": true, 00:20:01.691 "zcopy": false, 00:20:01.691 "get_zone_info": false, 00:20:01.691 "zone_management": false, 00:20:01.691 "zone_append": false, 00:20:01.691 "compare": 
true, 00:20:01.691 "compare_and_write": false, 00:20:01.691 "abort": true, 00:20:01.691 "seek_hole": false, 00:20:01.691 "seek_data": false, 00:20:01.691 "copy": true, 00:20:01.691 "nvme_iov_md": false 00:20:01.691 }, 00:20:01.691 "driver_specific": { 00:20:01.691 "nvme": [ 00:20:01.691 { 00:20:01.691 "pci_address": "0000:00:11.0", 00:20:01.691 "trid": { 00:20:01.691 "trtype": "PCIe", 00:20:01.691 "traddr": "0000:00:11.0" 00:20:01.691 }, 00:20:01.691 "ctrlr_data": { 00:20:01.691 "cntlid": 0, 00:20:01.691 "vendor_id": "0x1b36", 00:20:01.691 "model_number": "QEMU NVMe Ctrl", 00:20:01.691 "serial_number": "12341", 00:20:01.691 "firmware_revision": "8.0.0", 00:20:01.691 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:01.691 "oacs": { 00:20:01.691 "security": 0, 00:20:01.691 "format": 1, 00:20:01.691 "firmware": 0, 00:20:01.691 "ns_manage": 1 00:20:01.691 }, 00:20:01.691 "multi_ctrlr": false, 00:20:01.691 "ana_reporting": false 00:20:01.691 }, 00:20:01.691 "vs": { 00:20:01.691 "nvme_version": "1.4" 00:20:01.691 }, 00:20:01.691 "ns_data": { 00:20:01.691 "id": 1, 00:20:01.691 "can_share": false 00:20:01.691 } 00:20:01.691 } 00:20:01.691 ], 00:20:01.691 "mp_policy": "active_passive" 00:20:01.691 } 00:20:01.691 } 00:20:01.691 ]' 00:20:01.691 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.691 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:01.691 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.954 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:01.954 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:01.954 01:19:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:01.954 01:19:35 ftl.ftl_restore -- 
ftl/common.sh@67 -- # clear_lvols 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6dae33e5-23ba-4182-a602-bfbd5f907e6b 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:01.954 01:19:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dae33e5-23ba-4182-a602-bfbd5f907e6b 00:20:02.216 01:19:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:02.478 01:19:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8c4a9c13-655c-4aa2-b846-c38fb1f62718 00:20:02.478 01:19:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8c4a9c13-655c-4aa2-b846-c38fb1f62718 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:02.738 01:19:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:02.738 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 
00:20:02.738 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:02.738 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:02.738 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:02.738 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:02.999 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:02.999 { 00:20:02.999 "name": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:02.999 "aliases": [ 00:20:02.999 "lvs/nvme0n1p0" 00:20:02.999 ], 00:20:02.999 "product_name": "Logical Volume", 00:20:02.999 "block_size": 4096, 00:20:02.999 "num_blocks": 26476544, 00:20:02.999 "uuid": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:02.999 "assigned_rate_limits": { 00:20:02.999 "rw_ios_per_sec": 0, 00:20:02.999 "rw_mbytes_per_sec": 0, 00:20:02.999 "r_mbytes_per_sec": 0, 00:20:02.999 "w_mbytes_per_sec": 0 00:20:02.999 }, 00:20:02.999 "claimed": false, 00:20:02.999 "zoned": false, 00:20:02.999 "supported_io_types": { 00:20:02.999 "read": true, 00:20:02.999 "write": true, 00:20:02.999 "unmap": true, 00:20:02.999 "flush": false, 00:20:02.999 "reset": true, 00:20:02.999 "nvme_admin": false, 00:20:02.999 "nvme_io": false, 00:20:02.999 "nvme_io_md": false, 00:20:02.999 "write_zeroes": true, 00:20:02.999 "zcopy": false, 00:20:02.999 "get_zone_info": false, 00:20:02.999 "zone_management": false, 00:20:02.999 "zone_append": false, 00:20:02.999 "compare": false, 00:20:02.999 "compare_and_write": false, 00:20:02.999 "abort": false, 00:20:02.999 "seek_hole": true, 00:20:02.999 "seek_data": true, 00:20:02.999 "copy": false, 00:20:02.999 "nvme_iov_md": false 00:20:02.999 }, 00:20:02.999 "driver_specific": { 00:20:02.999 "lvol": { 00:20:02.999 "lvol_store_uuid": "8c4a9c13-655c-4aa2-b846-c38fb1f62718", 00:20:02.999 "base_bdev": "nvme0n1", 00:20:02.999 
"thin_provision": true, 00:20:02.999 "num_allocated_clusters": 0, 00:20:03.000 "snapshot": false, 00:20:03.000 "clone": false, 00:20:03.000 "esnap_clone": false 00:20:03.000 } 00:20:03.000 } 00:20:03.000 } 00:20:03.000 ]' 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:03.000 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:03.000 01:19:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:03.000 01:19:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:03.000 01:19:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:03.261 01:19:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:03.261 01:19:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:03.261 01:19:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:03.261 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:03.261 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:03.261 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:03.261 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:03.261 01:19:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:03.523 01:19:37 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:03.523 { 00:20:03.523 "name": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:03.523 "aliases": [ 00:20:03.523 "lvs/nvme0n1p0" 00:20:03.523 ], 00:20:03.523 "product_name": "Logical Volume", 00:20:03.523 "block_size": 4096, 00:20:03.523 "num_blocks": 26476544, 00:20:03.523 "uuid": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:03.523 "assigned_rate_limits": { 00:20:03.523 "rw_ios_per_sec": 0, 00:20:03.523 "rw_mbytes_per_sec": 0, 00:20:03.523 "r_mbytes_per_sec": 0, 00:20:03.523 "w_mbytes_per_sec": 0 00:20:03.523 }, 00:20:03.523 "claimed": false, 00:20:03.523 "zoned": false, 00:20:03.523 "supported_io_types": { 00:20:03.523 "read": true, 00:20:03.523 "write": true, 00:20:03.523 "unmap": true, 00:20:03.523 "flush": false, 00:20:03.523 "reset": true, 00:20:03.523 "nvme_admin": false, 00:20:03.523 "nvme_io": false, 00:20:03.523 "nvme_io_md": false, 00:20:03.523 "write_zeroes": true, 00:20:03.523 "zcopy": false, 00:20:03.523 "get_zone_info": false, 00:20:03.523 "zone_management": false, 00:20:03.523 "zone_append": false, 00:20:03.524 "compare": false, 00:20:03.524 "compare_and_write": false, 00:20:03.524 "abort": false, 00:20:03.524 "seek_hole": true, 00:20:03.524 "seek_data": true, 00:20:03.524 "copy": false, 00:20:03.524 "nvme_iov_md": false 00:20:03.524 }, 00:20:03.524 "driver_specific": { 00:20:03.524 "lvol": { 00:20:03.524 "lvol_store_uuid": "8c4a9c13-655c-4aa2-b846-c38fb1f62718", 00:20:03.524 "base_bdev": "nvme0n1", 00:20:03.524 "thin_provision": true, 00:20:03.524 "num_allocated_clusters": 0, 00:20:03.524 "snapshot": false, 00:20:03.524 "clone": false, 00:20:03.524 "esnap_clone": false 00:20:03.524 } 00:20:03.524 } 00:20:03.524 } 00:20:03.524 ]' 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] 
.num_blocks' 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:03.524 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:03.524 01:19:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:03.524 01:19:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:03.785 01:19:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:03.785 01:19:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:03.785 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:03.785 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:03.785 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:03.785 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:03.785 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:04.047 { 00:20:04.047 "name": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:04.047 "aliases": [ 00:20:04.047 "lvs/nvme0n1p0" 00:20:04.047 ], 00:20:04.047 "product_name": "Logical Volume", 00:20:04.047 "block_size": 4096, 00:20:04.047 "num_blocks": 26476544, 00:20:04.047 "uuid": "f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb", 00:20:04.047 "assigned_rate_limits": { 00:20:04.047 "rw_ios_per_sec": 0, 00:20:04.047 "rw_mbytes_per_sec": 0, 00:20:04.047 "r_mbytes_per_sec": 0, 00:20:04.047 "w_mbytes_per_sec": 0 00:20:04.047 }, 00:20:04.047 "claimed": false, 00:20:04.047 "zoned": false, 00:20:04.047 "supported_io_types": { 00:20:04.047 
"read": true, 00:20:04.047 "write": true, 00:20:04.047 "unmap": true, 00:20:04.047 "flush": false, 00:20:04.047 "reset": true, 00:20:04.047 "nvme_admin": false, 00:20:04.047 "nvme_io": false, 00:20:04.047 "nvme_io_md": false, 00:20:04.047 "write_zeroes": true, 00:20:04.047 "zcopy": false, 00:20:04.047 "get_zone_info": false, 00:20:04.047 "zone_management": false, 00:20:04.047 "zone_append": false, 00:20:04.047 "compare": false, 00:20:04.047 "compare_and_write": false, 00:20:04.047 "abort": false, 00:20:04.047 "seek_hole": true, 00:20:04.047 "seek_data": true, 00:20:04.047 "copy": false, 00:20:04.047 "nvme_iov_md": false 00:20:04.047 }, 00:20:04.047 "driver_specific": { 00:20:04.047 "lvol": { 00:20:04.047 "lvol_store_uuid": "8c4a9c13-655c-4aa2-b846-c38fb1f62718", 00:20:04.047 "base_bdev": "nvme0n1", 00:20:04.047 "thin_provision": true, 00:20:04.047 "num_allocated_clusters": 0, 00:20:04.047 "snapshot": false, 00:20:04.047 "clone": false, 00:20:04.047 "esnap_clone": false 00:20:04.047 } 00:20:04.047 } 00:20:04.047 } 00:20:04.047 ]' 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:04.047 01:19:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb --l2p_dram_limit 10' 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:04.047 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:04.047 01:19:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f5b2202f-3ac3-47e0-bfb6-78dfbf7fd5cb --l2p_dram_limit 10 -c nvc0n1p0 00:20:04.310 [2024-12-14 01:19:37.717371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.310 [2024-12-14 01:19:37.717418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:04.310 [2024-12-14 01:19:37.717431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:04.310 [2024-12-14 01:19:37.717439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.310 [2024-12-14 01:19:37.717483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.310 [2024-12-14 01:19:37.717492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.310 [2024-12-14 01:19:37.717500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:04.310 [2024-12-14 01:19:37.717510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.310 [2024-12-14 01:19:37.717525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:04.310 [2024-12-14 01:19:37.717957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:04.310 [2024-12-14 01:19:37.717991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.310 [2024-12-14 01:19:37.718000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.310 [2024-12-14 01:19:37.718007] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:20:04.310 [2024-12-14 01:19:37.718014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.310 [2024-12-14 01:19:37.718100] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6e33b643-f118-434f-b82d-1788ad8d0b55 00:20:04.310 [2024-12-14 01:19:37.719058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.310 [2024-12-14 01:19:37.719087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:04.310 [2024-12-14 01:19:37.719098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:04.310 [2024-12-14 01:19:37.719104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.310 [2024-12-14 01:19:37.724083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.310 [2024-12-14 01:19:37.724109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.310 [2024-12-14 01:19:37.724119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:20:04.310 [2024-12-14 01:19:37.724125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.310 [2024-12-14 01:19:37.724188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.724195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.311 [2024-12-14 01:19:37.724207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:04.311 [2024-12-14 01:19:37.724212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.724254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.724263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:04.311 [2024-12-14 01:19:37.724271] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:04.311 [2024-12-14 01:19:37.724277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.724295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.311 [2024-12-14 01:19:37.725648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.725673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.311 [2024-12-14 01:19:37.725680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:20:04.311 [2024-12-14 01:19:37.725687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.725716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.725724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:04.311 [2024-12-14 01:19:37.725730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:04.311 [2024-12-14 01:19:37.725830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.725849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:04.311 [2024-12-14 01:19:37.725970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:04.311 [2024-12-14 01:19:37.725980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:04.311 [2024-12-14 01:19:37.725991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:04.311 [2024-12-14 01:19:37.726003] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726011] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726018] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:04.311 [2024-12-14 01:19:37.726025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:04.311 [2024-12-14 01:19:37.726031] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:04.311 [2024-12-14 01:19:37.726037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:04.311 [2024-12-14 01:19:37.726043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.726050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:04.311 [2024-12-14 01:19:37.726056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:04.311 [2024-12-14 01:19:37.726063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.726129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.311 [2024-12-14 01:19:37.726146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:04.311 [2024-12-14 01:19:37.726152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:04.311 [2024-12-14 01:19:37.726161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.311 [2024-12-14 01:19:37.726237] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:04.311 [2024-12-14 01:19:37.726246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:04.311 [2024-12-14 01:19:37.726253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726265] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:04.311 [2024-12-14 01:19:37.726272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:04.311 [2024-12-14 01:19:37.726290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.311 [2024-12-14 01:19:37.726303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:04.311 [2024-12-14 01:19:37.726310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:04.311 [2024-12-14 01:19:37.726316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.311 [2024-12-14 01:19:37.726325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:04.311 [2024-12-14 01:19:37.726331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:04.311 [2024-12-14 01:19:37.726339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:04.311 [2024-12-14 01:19:37.726352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:04.311 [2024-12-14 01:19:37.726372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726385] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:04.311 [2024-12-14 01:19:37.726393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:04.311 [2024-12-14 01:19:37.726412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:04.311 [2024-12-14 01:19:37.726436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:04.311 [2024-12-14 01:19:37.726455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.311 [2024-12-14 01:19:37.726468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:04.311 [2024-12-14 01:19:37.726475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:04.311 [2024-12-14 01:19:37.726481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.311 [2024-12-14 01:19:37.726488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:04.311 [2024-12-14 01:19:37.726494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:04.311 [2024-12-14 01:19:37.726501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726508] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:04.311 [2024-12-14 01:19:37.726515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:04.311 [2024-12-14 01:19:37.726521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:04.311 [2024-12-14 01:19:37.726539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:04.311 [2024-12-14 01:19:37.726548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.311 [2024-12-14 01:19:37.726564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:04.311 [2024-12-14 01:19:37.726570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:04.311 [2024-12-14 01:19:37.726577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:04.311 [2024-12-14 01:19:37.726583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:04.311 [2024-12-14 01:19:37.726591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:04.311 [2024-12-14 01:19:37.726597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:04.311 [2024-12-14 01:19:37.726605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:04.311 [2024-12-14 01:19:37.726613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.311 [2024-12-14 01:19:37.726634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:04.311 [2024-12-14 01:19:37.726641] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:04.311 [2024-12-14 01:19:37.726649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:04.311 [2024-12-14 01:19:37.726656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:04.311 [2024-12-14 01:19:37.726664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:04.311 [2024-12-14 01:19:37.726670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:04.312 [2024-12-14 01:19:37.726679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:04.312 [2024-12-14 01:19:37.726686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:04.312 [2024-12-14 01:19:37.726693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:04.312 [2024-12-14 01:19:37.726699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:04.312 [2024-12-14 01:19:37.726730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:04.312 [2024-12-14 01:19:37.726736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:04.312 [2024-12-14 01:19:37.726752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:04.312 [2024-12-14 01:19:37.726760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:04.312 [2024-12-14 01:19:37.726766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:04.312 [2024-12-14 01:19:37.726772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.312 [2024-12-14 01:19:37.726778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:04.312 [2024-12-14 01:19:37.726787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:20:04.312 [2024-12-14 01:19:37.726793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.312 [2024-12-14 01:19:37.726822] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:20:04.312 [2024-12-14 01:19:37.726829] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:07.618 [2024-12-14 01:19:40.831033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.831128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:07.618 [2024-12-14 01:19:40.831154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3104.189 ms 00:20:07.618 [2024-12-14 01:19:40.831164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.844796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.844860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.618 [2024-12-14 01:19:40.844877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:20:07.618 [2024-12-14 01:19:40.844890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.845020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.845031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:07.618 [2024-12-14 01:19:40.845044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:07.618 [2024-12-14 01:19:40.845052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.857801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.857855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.618 [2024-12-14 01:19:40.857869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.706 ms 00:20:07.618 [2024-12-14 01:19:40.857881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.857916] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.857925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.618 [2024-12-14 01:19:40.857936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:07.618 [2024-12-14 01:19:40.857944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.858514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.858551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.618 [2024-12-14 01:19:40.858566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:07.618 [2024-12-14 01:19:40.858576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.858730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.858742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.618 [2024-12-14 01:19:40.858754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:07.618 [2024-12-14 01:19:40.858767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.867224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.867275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.618 [2024-12-14 01:19:40.867289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.425 ms 00:20:07.618 [2024-12-14 01:19:40.867296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.885649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.618 [2024-12-14 01:19:40.889744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 
[2024-12-14 01:19:40.889799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.618 [2024-12-14 01:19:40.889813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.368 ms 00:20:07.618 [2024-12-14 01:19:40.889824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.981019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.981088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:07.618 [2024-12-14 01:19:40.981101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.146 ms 00:20:07.618 [2024-12-14 01:19:40.981123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.981335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.981350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.618 [2024-12-14 01:19:40.981359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:07.618 [2024-12-14 01:19:40.981369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.987187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.987246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:07.618 [2024-12-14 01:19:40.987262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.781 ms 00:20:07.618 [2024-12-14 01:19:40.987273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.992242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.992300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:07.618 [2024-12-14 01:19:40.992310] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:20:07.618 [2024-12-14 01:19:40.992320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:40.992693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:40.992709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.618 [2024-12-14 01:19:40.992719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:20:07.618 [2024-12-14 01:19:40.992734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:41.038577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:41.038651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:07.618 [2024-12-14 01:19:41.038667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.803 ms 00:20:07.618 [2024-12-14 01:19:41.038678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:41.045593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:41.045663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:07.618 [2024-12-14 01:19:41.045675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.855 ms 00:20:07.618 [2024-12-14 01:19:41.045685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 01:19:41.051402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:41.051455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:07.618 [2024-12-14 01:19:41.051465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:20:07.618 [2024-12-14 01:19:41.051475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.618 [2024-12-14 
01:19:41.057610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.618 [2024-12-14 01:19:41.057676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.618 [2024-12-14 01:19:41.057686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:20:07.619 [2024-12-14 01:19:41.057699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.619 [2024-12-14 01:19:41.057749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.619 [2024-12-14 01:19:41.057761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.619 [2024-12-14 01:19:41.057771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:07.619 [2024-12-14 01:19:41.057782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.619 [2024-12-14 01:19:41.057853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.619 [2024-12-14 01:19:41.057867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.619 [2024-12-14 01:19:41.057879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:07.619 [2024-12-14 01:19:41.057889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.619 [2024-12-14 01:19:41.059016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3341.137 ms, result 0 00:20:07.619 { 00:20:07.619 "name": "ftl0", 00:20:07.619 "uuid": "6e33b643-f118-434f-b82d-1788ad8d0b55" 00:20:07.619 } 00:20:07.619 01:19:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:07.619 01:19:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:07.881 01:19:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:07.881 01:19:41 ftl.ftl_restore -- ftl/restore.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:07.881 [2024-12-14 01:19:41.436946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.436991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:07.881 [2024-12-14 01:19:41.437006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:07.881 [2024-12-14 01:19:41.437014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.437040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.881 [2024-12-14 01:19:41.437536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.437567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.881 [2024-12-14 01:19:41.437577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:20:07.881 [2024-12-14 01:19:41.437586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.437850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.437864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.881 [2024-12-14 01:19:41.437875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:07.881 [2024-12-14 01:19:41.437885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.441117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.441138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.881 [2024-12-14 01:19:41.441147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:20:07.881 [2024-12-14 01:19:41.441161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.447419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.447450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.881 [2024-12-14 01:19:41.447460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:20:07.881 [2024-12-14 01:19:41.447475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.449752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.449793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.881 [2024-12-14 01:19:41.449801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:20:07.881 [2024-12-14 01:19:41.449812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.454840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.454879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.881 [2024-12-14 01:19:41.454896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms 00:20:07.881 [2024-12-14 01:19:41.454908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.455029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.455043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.881 [2024-12-14 01:19:41.455052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:07.881 [2024-12-14 01:19:41.455060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.457337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.457371] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.881 [2024-12-14 01:19:41.457381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:20:07.881 [2024-12-14 01:19:41.457390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.459697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.459734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.881 [2024-12-14 01:19:41.459743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.270 ms 00:20:07.881 [2024-12-14 01:19:41.459752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.461690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.461724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.881 [2024-12-14 01:19:41.461732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:20:07.881 [2024-12-14 01:19:41.461740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.463478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.881 [2024-12-14 01:19:41.463511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:07.881 [2024-12-14 01:19:41.463520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:20:07.881 [2024-12-14 01:19:41.463528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.881 [2024-12-14 01:19:41.463557] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:07.881 [2024-12-14 01:19:41.463573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463827] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463941] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:07.881 [2024-12-14 01:19:41.463950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.463957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.463966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.463973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.463983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.463991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464057] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 
01:19:41.464172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 
[2024-12-14 01:19:41.464286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:20:07.882 [2024-12-14 01:19:41.464401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:07.882 [2024-12-14 01:19:41.464420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:07.882 [2024-12-14 01:19:41.464428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e33b643-f118-434f-b82d-1788ad8d0b55 00:20:07.882 [2024-12-14 01:19:41.464438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:07.882 [2024-12-14 01:19:41.464445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:07.882 [2024-12-14 01:19:41.464453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:07.882 [2024-12-14 01:19:41.464460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:07.882 [2024-12-14 01:19:41.464470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:07.882 [2024-12-14 01:19:41.464478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:07.882 [2024-12-14 01:19:41.464487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:07.882 [2024-12-14 01:19:41.464493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:07.882 [2024-12-14 01:19:41.464501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:07.882 [2024-12-14 01:19:41.464508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.882 [2024-12-14 01:19:41.464516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:07.882 [2024-12-14 01:19:41.464524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:20:07.882 [2024-12-14 01:19:41.464533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.882 [2024-12-14 01:19:41.466022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.882 [2024-12-14 01:19:41.466054] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:07.882 [2024-12-14 01:19:41.466065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:20:07.882 [2024-12-14 01:19:41.466074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.882 [2024-12-14 01:19:41.466171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.882 [2024-12-14 01:19:41.466182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:07.882 [2024-12-14 01:19:41.466190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:07.882 [2024-12-14 01:19:41.466199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.882 [2024-12-14 01:19:41.471442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.882 [2024-12-14 01:19:41.471481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.882 [2024-12-14 01:19:41.471490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.882 [2024-12-14 01:19:41.471499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.882 [2024-12-14 01:19:41.471555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.882 [2024-12-14 01:19:41.471564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.882 [2024-12-14 01:19:41.471572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.882 [2024-12-14 01:19:41.471580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.882 [2024-12-14 01:19:41.471672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.882 [2024-12-14 01:19:41.471687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.883 [2024-12-14 01:19:41.471697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.471706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.471721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.471732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.883 [2024-12-14 01:19:41.471739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.471751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.481316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.481363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.883 [2024-12-14 01:19:41.481376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.481385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.489231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.489272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.883 [2024-12-14 01:19:41.489283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.489292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.489337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.489350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.883 [2024-12-14 01:19:41.489358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.489370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.489431] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.489443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.883 [2024-12-14 01:19:41.489451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.489460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.489534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.489547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.883 [2024-12-14 01:19:41.489555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.489563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.883 [2024-12-14 01:19:41.489598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.883 [2024-12-14 01:19:41.489608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.883 [2024-12-14 01:19:41.489617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.883 [2024-12-14 01:19:41.489642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.143 [2024-12-14 01:19:41.489682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.143 [2024-12-14 01:19:41.489694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.144 [2024-12-14 01:19:41.489701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.144 [2024-12-14 01:19:41.489713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.144 [2024-12-14 01:19:41.489762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.144 [2024-12-14 01:19:41.489774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:20:08.144 [2024-12-14 01:19:41.489784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.144 [2024-12-14 01:19:41.489793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.144 [2024-12-14 01:19:41.489922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.943 ms, result 0 00:20:08.144 true 00:20:08.144 01:19:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89742 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89742 ']' 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89742 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89742 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.144 killing process with pid 89742 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89742' 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 89742 00:20:08.144 01:19:41 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 89742 00:20:13.436 01:19:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:17.644 262144+0 records in 00:20:17.644 262144+0 records out 00:20:17.644 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.03531 s, 266 MB/s 00:20:17.645 01:19:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:19.030 01:19:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:19.030 [2024-12-14 01:19:52.579270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:19.030 [2024-12-14 01:19:52.579816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89956 ] 00:20:19.291 [2024-12-14 01:19:52.728109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.291 [2024-12-14 01:19:52.749062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.291 [2024-12-14 01:19:52.861920] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.291 [2024-12-14 01:19:52.862006] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.554 [2024-12-14 01:19:53.021682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.021739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.554 [2024-12-14 01:19:53.021754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.554 [2024-12-14 01:19:53.021763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.021817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.021827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.554 [2024-12-14 01:19:53.021837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:19.554 [2024-12-14 01:19:53.021856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 
01:19:53.021882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.554 [2024-12-14 01:19:53.022431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.554 [2024-12-14 01:19:53.022481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.022496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.554 [2024-12-14 01:19:53.022509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:20:19.554 [2024-12-14 01:19:53.022518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.024136] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.554 [2024-12-14 01:19:53.027877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.027923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.554 [2024-12-14 01:19:53.027935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:20:19.554 [2024-12-14 01:19:53.027951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.028023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.028035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.554 [2024-12-14 01:19:53.028044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:19.554 [2024-12-14 01:19:53.028056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.035951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.035990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.554 
[2024-12-14 01:19:53.036010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.853 ms 00:20:19.554 [2024-12-14 01:19:53.036021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.036122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.036133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.554 [2024-12-14 01:19:53.036145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:19.554 [2024-12-14 01:19:53.036154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.036212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.036223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.554 [2024-12-14 01:19:53.036232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:19.554 [2024-12-14 01:19:53.036244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.036267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.554 [2024-12-14 01:19:53.038300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.038340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.554 [2024-12-14 01:19:53.038350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:20:19.554 [2024-12-14 01:19:53.038358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.038399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.038409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.554 [2024-12-14 01:19:53.038418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.554 [2024-12-14 01:19:53.038427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.038452] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.554 [2024-12-14 01:19:53.038474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.554 [2024-12-14 01:19:53.038520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.554 [2024-12-14 01:19:53.038536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.554 [2024-12-14 01:19:53.038672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.554 [2024-12-14 01:19:53.038684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.554 [2024-12-14 01:19:53.038698] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.554 [2024-12-14 01:19:53.038710] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.554 [2024-12-14 01:19:53.038718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.554 [2024-12-14 01:19:53.038727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.554 [2024-12-14 01:19:53.038740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.554 [2024-12-14 01:19:53.038748] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.554 [2024-12-14 01:19:53.038756] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.554 
[2024-12-14 01:19:53.038768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.038777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.554 [2024-12-14 01:19:53.038785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:19.554 [2024-12-14 01:19:53.038794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.554 [2024-12-14 01:19:53.038882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.554 [2024-12-14 01:19:53.038899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.555 [2024-12-14 01:19:53.038907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:19.555 [2024-12-14 01:19:53.038915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.555 [2024-12-14 01:19:53.039017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.555 [2024-12-14 01:19:53.039030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.555 [2024-12-14 01:19:53.039039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.555 [2024-12-14 01:19:53.039066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.555 [2024-12-14 01:19:53.039091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:20:19.555 [2024-12-14 01:19:53.039107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.555 [2024-12-14 01:19:53.039117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.555 [2024-12-14 01:19:53.039125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.555 [2024-12-14 01:19:53.039133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.555 [2024-12-14 01:19:53.039141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.555 [2024-12-14 01:19:53.039149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.555 [2024-12-14 01:19:53.039167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.555 [2024-12-14 01:19:53.039193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.555 [2024-12-14 01:19:53.039217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.555 [2024-12-14 01:19:53.039242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.555 [2024-12-14 01:19:53.039269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.555 [2024-12-14 01:19:53.039292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.555 [2024-12-14 01:19:53.039307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.555 [2024-12-14 01:19:53.039314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.555 [2024-12-14 01:19:53.039322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.555 [2024-12-14 01:19:53.039330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.555 [2024-12-14 01:19:53.039338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.555 [2024-12-14 01:19:53.039346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.555 [2024-12-14 01:19:53.039361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.555 [2024-12-14 01:19:53.039368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039376] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.555 [2024-12-14 01:19:53.039387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.555 [2024-12-14 01:19:53.039395] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.555 [2024-12-14 01:19:53.039410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.555 [2024-12-14 01:19:53.039417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.555 [2024-12-14 01:19:53.039425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.555 [2024-12-14 01:19:53.039433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.555 [2024-12-14 01:19:53.039441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.555 [2024-12-14 01:19:53.039448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.555 [2024-12-14 01:19:53.039456] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.555 [2024-12-14 01:19:53.039467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.555 [2024-12-14 01:19:53.039483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.555 [2024-12-14 01:19:53.039491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.555 [2024-12-14 01:19:53.039498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.555 [2024-12-14 01:19:53.039507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.555 [2024-12-14 01:19:53.039515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.555 [2024-12-14 01:19:53.039522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.555 [2024-12-14 01:19:53.039531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.555 [2024-12-14 01:19:53.039538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.555 [2024-12-14 01:19:53.039551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.555 [2024-12-14 01:19:53.039589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.555 [2024-12-14 01:19:53.039597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039605] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.555 [2024-12-14 01:19:53.039613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.555 [2024-12-14 01:19:53.039643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.555 [2024-12-14 01:19:53.039652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.555 [2024-12-14 01:19:53.039663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.555 [2024-12-14 01:19:53.039671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.555 [2024-12-14 01:19:53.039684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:19.555 [2024-12-14 01:19:53.039694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.555 [2024-12-14 01:19:53.052983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.555 [2024-12-14 01:19:53.053029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.555 [2024-12-14 01:19:53.053044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.244 ms 00:20:19.555 [2024-12-14 01:19:53.053052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.555 [2024-12-14 01:19:53.053134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.555 [2024-12-14 01:19:53.053151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.555 [2024-12-14 01:19:53.053160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:19.555 [2024-12-14 01:19:53.053168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:19.555 [2024-12-14 01:19:53.076786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.555 [2024-12-14 01:19:53.076867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.555 [2024-12-14 01:19:53.076891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.553 ms 00:20:19.555 [2024-12-14 01:19:53.076907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.555 [2024-12-14 01:19:53.076989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.555 [2024-12-14 01:19:53.077010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.556 [2024-12-14 01:19:53.077028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.556 [2024-12-14 01:19:53.077043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.077824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.077887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.556 [2024-12-14 01:19:53.077907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:20:19.556 [2024-12-14 01:19:53.077922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.078186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.078206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.556 [2024-12-14 01:19:53.078223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:20:19.556 [2024-12-14 01:19:53.078248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.086714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.086758] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.556 [2024-12-14 01:19:53.086770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.430 ms 00:20:19.556 [2024-12-14 01:19:53.086778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.090570] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:19.556 [2024-12-14 01:19:53.090638] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.556 [2024-12-14 01:19:53.090651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.090659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.556 [2024-12-14 01:19:53.090669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.776 ms 00:20:19.556 [2024-12-14 01:19:53.090676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.111497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.111576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.556 [2024-12-14 01:19:53.111591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.755 ms 00:20:19.556 [2024-12-14 01:19:53.111600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.114961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.115016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.556 [2024-12-14 01:19:53.115026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:20:19.556 [2024-12-14 01:19:53.115034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.118035] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.118085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.556 [2024-12-14 01:19:53.118095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:20:19.556 [2024-12-14 01:19:53.118102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.118457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.118471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.556 [2024-12-14 01:19:53.118480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:19.556 [2024-12-14 01:19:53.118488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.144095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.144154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.556 [2024-12-14 01:19:53.144166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.584 ms 00:20:19.556 [2024-12-14 01:19:53.144175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.152244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.556 [2024-12-14 01:19:53.155321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.155370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.556 [2024-12-14 01:19:53.155386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.095 ms 00:20:19.556 [2024-12-14 01:19:53.155397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.155477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.155488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.556 [2024-12-14 01:19:53.155501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.556 [2024-12-14 01:19:53.155515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.155588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.155599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.556 [2024-12-14 01:19:53.155612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:19.556 [2024-12-14 01:19:53.155648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.155669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.155678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.556 [2024-12-14 01:19:53.155686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.556 [2024-12-14 01:19:53.155694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.155732] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.556 [2024-12-14 01:19:53.155743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.155752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.556 [2024-12-14 01:19:53.155760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:19.556 [2024-12-14 01:19:53.155767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.161307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.161355] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.556 [2024-12-14 01:19:53.161366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.519 ms 00:20:19.556 [2024-12-14 01:19:53.161373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.556 [2024-12-14 01:19:53.161483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.556 [2024-12-14 01:19:53.161497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.556 [2024-12-14 01:19:53.161506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:19.556 [2024-12-14 01:19:53.161517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.817 [2024-12-14 01:19:53.162666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.513 ms, result 0 00:20:20.761  [2024-12-14T01:19:55.317Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-14T01:19:56.262Z] Copying: 42/1024 [MB] (18 MBps) [2024-12-14T01:19:57.206Z] Copying: 58/1024 [MB] (16 MBps) [2024-12-14T01:19:58.593Z] Copying: 70/1024 [MB] (11 MBps) [2024-12-14T01:19:59.538Z] Copying: 85/1024 [MB] (15 MBps) [2024-12-14T01:20:00.483Z] Copying: 100/1024 [MB] (14 MBps) [2024-12-14T01:20:01.425Z] Copying: 118/1024 [MB] (18 MBps) [2024-12-14T01:20:02.426Z] Copying: 129/1024 [MB] (10 MBps) [2024-12-14T01:20:03.369Z] Copying: 154/1024 [MB] (25 MBps) [2024-12-14T01:20:04.313Z] Copying: 187/1024 [MB] (33 MBps) [2024-12-14T01:20:05.261Z] Copying: 219/1024 [MB] (31 MBps) [2024-12-14T01:20:06.205Z] Copying: 247/1024 [MB] (28 MBps) [2024-12-14T01:20:07.592Z] Copying: 265/1024 [MB] (17 MBps) [2024-12-14T01:20:08.535Z] Copying: 282/1024 [MB] (17 MBps) [2024-12-14T01:20:09.480Z] Copying: 299/1024 [MB] (16 MBps) [2024-12-14T01:20:10.424Z] Copying: 309/1024 [MB] (10 MBps) [2024-12-14T01:20:11.368Z] Copying: 329/1024 [MB] (19 MBps) [2024-12-14T01:20:12.313Z] 
Copying: 353/1024 [MB] (24 MBps) [2024-12-14T01:20:13.258Z] Copying: 363/1024 [MB] (10 MBps) [2024-12-14T01:20:14.203Z] Copying: 375/1024 [MB] (11 MBps) [2024-12-14T01:20:15.592Z] Copying: 385/1024 [MB] (10 MBps) [2024-12-14T01:20:16.537Z] Copying: 404608/1048576 [kB] (10056 kBps) [2024-12-14T01:20:17.482Z] Copying: 405/1024 [MB] (10 MBps) [2024-12-14T01:20:18.426Z] Copying: 415/1024 [MB] (10 MBps) [2024-12-14T01:20:19.371Z] Copying: 434/1024 [MB] (18 MBps) [2024-12-14T01:20:20.314Z] Copying: 455/1024 [MB] (21 MBps) [2024-12-14T01:20:21.258Z] Copying: 495/1024 [MB] (40 MBps) [2024-12-14T01:20:22.201Z] Copying: 516/1024 [MB] (21 MBps) [2024-12-14T01:20:23.589Z] Copying: 536/1024 [MB] (19 MBps) [2024-12-14T01:20:24.533Z] Copying: 552/1024 [MB] (16 MBps) [2024-12-14T01:20:25.476Z] Copying: 592/1024 [MB] (40 MBps) [2024-12-14T01:20:26.418Z] Copying: 620/1024 [MB] (27 MBps) [2024-12-14T01:20:27.360Z] Copying: 637/1024 [MB] (17 MBps) [2024-12-14T01:20:28.344Z] Copying: 658/1024 [MB] (21 MBps) [2024-12-14T01:20:29.322Z] Copying: 680/1024 [MB] (21 MBps) [2024-12-14T01:20:30.267Z] Copying: 692/1024 [MB] (12 MBps) [2024-12-14T01:20:31.211Z] Copying: 714/1024 [MB] (22 MBps) [2024-12-14T01:20:32.599Z] Copying: 736/1024 [MB] (22 MBps) [2024-12-14T01:20:33.541Z] Copying: 761/1024 [MB] (24 MBps) [2024-12-14T01:20:34.485Z] Copying: 781/1024 [MB] (20 MBps) [2024-12-14T01:20:35.425Z] Copying: 815/1024 [MB] (33 MBps) [2024-12-14T01:20:36.368Z] Copying: 856/1024 [MB] (41 MBps) [2024-12-14T01:20:37.313Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-14T01:20:38.257Z] Copying: 891/1024 [MB] (12 MBps) [2024-12-14T01:20:39.201Z] Copying: 913/1024 [MB] (22 MBps) [2024-12-14T01:20:40.587Z] Copying: 935/1024 [MB] (21 MBps) [2024-12-14T01:20:41.529Z] Copying: 973/1024 [MB] (38 MBps) [2024-12-14T01:20:42.474Z] Copying: 987/1024 [MB] (14 MBps) [2024-12-14T01:20:43.419Z] Copying: 1005/1024 [MB] (17 MBps) [2024-12-14T01:20:43.419Z] Copying: 1023/1024 [MB] (17 MBps) [2024-12-14T01:20:43.419Z] Copying: 
1024/1024 [MB] (average 20 MBps)[2024-12-14 01:20:43.219013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.219072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.807 [2024-12-14 01:20:43.219088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.807 [2024-12-14 01:20:43.219105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.219128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.807 [2024-12-14 01:20:43.219943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.219983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.807 [2024-12-14 01:20:43.219996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:21:09.807 [2024-12-14 01:20:43.220005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.222648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.222698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.807 [2024-12-14 01:20:43.222708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:21:09.807 [2024-12-14 01:20:43.222716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.240698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.240765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.807 [2024-12-14 01:20:43.240777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.952 ms 00:21:09.807 [2024-12-14 01:20:43.240786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 
01:20:43.246899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.246939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.807 [2024-12-14 01:20:43.246951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:21:09.807 [2024-12-14 01:20:43.246959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.250142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.250197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.807 [2024-12-14 01:20:43.250207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:21:09.807 [2024-12-14 01:20:43.250215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.254971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.255030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.807 [2024-12-14 01:20:43.255041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.708 ms 00:21:09.807 [2024-12-14 01:20:43.255049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.255185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.255196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.807 [2024-12-14 01:20:43.255205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:09.807 [2024-12-14 01:20:43.255213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.807 [2024-12-14 01:20:43.258583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.807 [2024-12-14 01:20:43.258651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist band info metadata 00:21:09.808 [2024-12-14 01:20:43.258663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.350 ms 00:21:09.808 [2024-12-14 01:20:43.258670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.808 [2024-12-14 01:20:43.261679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.808 [2024-12-14 01:20:43.261725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.808 [2024-12-14 01:20:43.261735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:21:09.808 [2024-12-14 01:20:43.261742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.808 [2024-12-14 01:20:43.264395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.808 [2024-12-14 01:20:43.264447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.808 [2024-12-14 01:20:43.264456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:21:09.808 [2024-12-14 01:20:43.264463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.808 [2024-12-14 01:20:43.266935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.808 [2024-12-14 01:20:43.266988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.808 [2024-12-14 01:20:43.266997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:21:09.808 [2024-12-14 01:20:43.267004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.808 [2024-12-14 01:20:43.267045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.808 [2024-12-14 01:20:43.267061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: 
free 00:21:09.808 [2024-12-14 01:20:43.267080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:21:09.808 [2024-12-14 01:20:43.267187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: 
free 00:21:09.808 [2024-12-14 01:20:43.267288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:21:09.808 [2024-12-14 01:20:43.267404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 
0 state: free 00:21:09.808 [2024-12-14 01:20:43.267514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.808 [2024-12-14 01:20:43.267695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 
261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:21:09.809 [2024-12-14 01:20:43.267870] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.809 [2024-12-14 01:20:43.267885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e33b643-f118-434f-b82d-1788ad8d0b55 00:21:09.809 [2024-12-14 01:20:43.267896] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.809 [2024-12-14 01:20:43.267904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.809 [2024-12-14 01:20:43.267911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.809 [2024-12-14 01:20:43.267920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.809 [2024-12-14 01:20:43.267928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.809 [2024-12-14 01:20:43.267940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.809 [2024-12-14 01:20:43.267951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.809 [2024-12-14 01:20:43.267958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.809 [2024-12-14 01:20:43.267965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.809 [2024-12-14 01:20:43.267972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.809 [2024-12-14 01:20:43.267980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.809 [2024-12-14 01:20:43.267997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:21:09.809 [2024-12-14 01:20:43.268005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.270406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.809 [2024-12-14 01:20:43.270449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.809 [2024-12-14 
01:20:43.270460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:21:09.809 [2024-12-14 01:20:43.270469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.270594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.809 [2024-12-14 01:20:43.270604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.809 [2024-12-14 01:20:43.270618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:09.809 [2024-12-14 01:20:43.270646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.278391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.278445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.809 [2024-12-14 01:20:43.278456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.278472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.278537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.278546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.809 [2024-12-14 01:20:43.278554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.278562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.278643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.278660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.809 [2024-12-14 01:20:43.278668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.278676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.278692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.278702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.809 [2024-12-14 01:20:43.278710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.278718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.292494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.292548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.809 [2024-12-14 01:20:43.292560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.292568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.302832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.302886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.809 [2024-12-14 01:20:43.302897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.302905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.302957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.302966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.809 [2024-12-14 01:20:43.302974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.302983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.303026] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.809 [2024-12-14 01:20:43.303037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.303048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.303126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.809 [2024-12-14 01:20:43.303134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.303142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.303180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.809 [2024-12-14 01:20:43.303187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.303198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.303247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.809 [2024-12-14 01:20:43.303254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.809 [2024-12-14 01:20:43.303262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.809 [2024-12-14 01:20:43.303318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.809 [2024-12-14 01:20:43.303327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:09.809 [2024-12-14 01:20:43.303337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.809 [2024-12-14 01:20:43.303476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.426 ms, result 0 00:21:10.070 00:21:10.070 00:21:10.070 01:20:43 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:10.332 [2024-12-14 01:20:43.731100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:10.332 [2024-12-14 01:20:43.731432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90478 ] 00:21:10.332 [2024-12-14 01:20:43.875232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.332 [2024-12-14 01:20:43.903727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.594 [2024-12-14 01:20:44.016894] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.594 [2024-12-14 01:20:44.016968] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.594 [2024-12-14 01:20:44.178282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.178342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:10.594 [2024-12-14 01:20:44.178357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:10.594 [2024-12-14 01:20:44.178372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.178432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.178443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.594 [2024-12-14 01:20:44.178453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:10.594 [2024-12-14 01:20:44.178467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.178503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:10.594 [2024-12-14 01:20:44.178806] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:10.594 [2024-12-14 01:20:44.178831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.178843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.594 [2024-12-14 01:20:44.178855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:21:10.594 [2024-12-14 01:20:44.178863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.180723] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:10.594 [2024-12-14 01:20:44.184706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.184754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:10.594 [2024-12-14 01:20:44.184766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.986 ms 00:21:10.594 [2024-12-14 01:20:44.184784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.184867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.184879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:10.594 [2024-12-14 01:20:44.184889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:21:10.594 [2024-12-14 01:20:44.184897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.193402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.193452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.594 [2024-12-14 01:20:44.193466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.462 ms 00:21:10.594 [2024-12-14 01:20:44.193474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.193573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.193583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.594 [2024-12-14 01:20:44.193593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:10.594 [2024-12-14 01:20:44.193601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.193689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.193704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:10.594 [2024-12-14 01:20:44.193713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:10.594 [2024-12-14 01:20:44.193726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.193749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:10.594 [2024-12-14 01:20:44.195761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.195794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.594 [2024-12-14 01:20:44.195805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:21:10.594 
[2024-12-14 01:20:44.195813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.195853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.195866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:10.594 [2024-12-14 01:20:44.195874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:10.594 [2024-12-14 01:20:44.195884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.195909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:10.594 [2024-12-14 01:20:44.195932] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:10.594 [2024-12-14 01:20:44.195979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:10.594 [2024-12-14 01:20:44.195995] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:10.594 [2024-12-14 01:20:44.196109] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:10.594 [2024-12-14 01:20:44.196121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:10.594 [2024-12-14 01:20:44.196135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:10.594 [2024-12-14 01:20:44.196145] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:10.594 [2024-12-14 01:20:44.196155] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:10.594 [2024-12-14 01:20:44.196167] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 
00:21:10.594 [2024-12-14 01:20:44.196174] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:10.594 [2024-12-14 01:20:44.196181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:10.594 [2024-12-14 01:20:44.196193] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:10.594 [2024-12-14 01:20:44.196200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.196208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:10.594 [2024-12-14 01:20:44.196216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:10.594 [2024-12-14 01:20:44.196227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.196313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.594 [2024-12-14 01:20:44.196321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:10.594 [2024-12-14 01:20:44.196329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:10.594 [2024-12-14 01:20:44.196336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-14 01:20:44.196437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:10.594 [2024-12-14 01:20:44.196448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:10.594 [2024-12-14 01:20:44.196459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.594 [2024-12-14 01:20:44.196467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.594 [2024-12-14 01:20:44.196476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:10.594 [2024-12-14 01:20:44.196484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:10.594 [2024-12-14 01:20:44.196493] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:10.594 [2024-12-14 01:20:44.196502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:10.594 [2024-12-14 01:20:44.196510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:10.594 [2024-12-14 01:20:44.196518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.594 [2024-12-14 01:20:44.196526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:10.594 [2024-12-14 01:20:44.196538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:10.594 [2024-12-14 01:20:44.196545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.594 [2024-12-14 01:20:44.196553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:10.594 [2024-12-14 01:20:44.196561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:10.594 [2024-12-14 01:20:44.196568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:10.595 [2024-12-14 01:20:44.196584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:10.595 [2024-12-14 01:20:44.196610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:10.595 [2024-12-14 01:20:44.196651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196658] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:10.595 [2024-12-14 01:20:44.196674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:10.595 [2024-12-14 01:20:44.196702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:10.595 [2024-12-14 01:20:44.196726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.595 [2024-12-14 01:20:44.196742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:10.595 [2024-12-14 01:20:44.196749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:10.595 [2024-12-14 01:20:44.196757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.595 [2024-12-14 01:20:44.196765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:10.595 [2024-12-14 01:20:44.196772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:10.595 [2024-12-14 01:20:44.196779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:10.595 [2024-12-14 01:20:44.196793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:10.595 
[2024-12-14 01:20:44.196801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196810] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:10.595 [2024-12-14 01:20:44.196821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:10.595 [2024-12-14 01:20:44.196829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.595 [2024-12-14 01:20:44.196845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:10.595 [2024-12-14 01:20:44.196852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:10.595 [2024-12-14 01:20:44.196859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:10.595 [2024-12-14 01:20:44.196868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:10.595 [2024-12-14 01:20:44.196874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:10.595 [2024-12-14 01:20:44.196881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:10.595 [2024-12-14 01:20:44.196889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:10.595 [2024-12-14 01:20:44.196903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.196911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:10.595 [2024-12-14 01:20:44.196919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:10.595 [2024-12-14 01:20:44.196926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:10.595 [2024-12-14 01:20:44.196933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:10.595 [2024-12-14 01:20:44.196942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:10.595 [2024-12-14 01:20:44.196950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:10.595 [2024-12-14 01:20:44.196958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:10.595 [2024-12-14 01:20:44.196965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:10.595 [2024-12-14 01:20:44.196972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:10.595 [2024-12-14 01:20:44.196985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.196992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.196999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.197006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.197013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:21:10.595 [2024-12-14 01:20:44.197020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:10.595 [2024-12-14 01:20:44.197028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.197037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:10.595 [2024-12-14 01:20:44.197044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:10.595 [2024-12-14 01:20:44.197052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:10.595 [2024-12-14 01:20:44.197059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:10.595 [2024-12-14 01:20:44.197068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.595 [2024-12-14 01:20:44.197076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:10.595 [2024-12-14 01:20:44.197083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:21:10.595 [2024-12-14 01:20:44.197093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.210617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.210688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.857 [2024-12-14 01:20:44.210699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.479 ms 00:21:10.857 [2024-12-14 01:20:44.210707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.210793] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.210802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:10.857 [2024-12-14 01:20:44.210810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:10.857 [2024-12-14 01:20:44.210817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.232605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.232677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.857 [2024-12-14 01:20:44.232689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.730 ms 00:21:10.857 [2024-12-14 01:20:44.232702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.232755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.232765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.857 [2024-12-14 01:20:44.232774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:10.857 [2024-12-14 01:20:44.232783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.233324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.233367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.857 [2024-12-14 01:20:44.233379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:21:10.857 [2024-12-14 01:20:44.233389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.233577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.233591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.857 
[2024-12-14 01:20:44.233601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:21:10.857 [2024-12-14 01:20:44.233609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.240992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.241032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.857 [2024-12-14 01:20:44.241043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.337 ms 00:21:10.857 [2024-12-14 01:20:44.241051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.244820] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:10.857 [2024-12-14 01:20:44.244864] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:10.857 [2024-12-14 01:20:44.244882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.244892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:10.857 [2024-12-14 01:20:44.244903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:21:10.857 [2024-12-14 01:20:44.244911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.260630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.260679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:10.857 [2024-12-14 01:20:44.260691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.653 ms 00:21:10.857 [2024-12-14 01:20:44.260703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.263716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 
[2024-12-14 01:20:44.263754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:10.857 [2024-12-14 01:20:44.263764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:21:10.857 [2024-12-14 01:20:44.263771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.266509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.266554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:10.857 [2024-12-14 01:20:44.266564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:21:10.857 [2024-12-14 01:20:44.266571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.266932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.266953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:10.857 [2024-12-14 01:20:44.266969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:21:10.857 [2024-12-14 01:20:44.266977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.291590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.291655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:10.857 [2024-12-14 01:20:44.291667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.584 ms 00:21:10.857 [2024-12-14 01:20:44.291679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.299682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:10.857 [2024-12-14 01:20:44.302605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.302662] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:10.857 [2024-12-14 01:20:44.302673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.881 ms 00:21:10.857 [2024-12-14 01:20:44.302688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.302762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.302773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:10.857 [2024-12-14 01:20:44.302791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:10.857 [2024-12-14 01:20:44.302799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.302865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.302878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:10.857 [2024-12-14 01:20:44.302887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:10.857 [2024-12-14 01:20:44.302894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.302914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.302922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:10.857 [2024-12-14 01:20:44.302930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:10.857 [2024-12-14 01:20:44.302937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.302974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:10.857 [2024-12-14 01:20:44.302988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.302996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test 
on startup 00:21:10.857 [2024-12-14 01:20:44.303006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:10.857 [2024-12-14 01:20:44.303014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.308078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.308117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:10.857 [2024-12-14 01:20:44.308128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:21:10.857 [2024-12-14 01:20:44.308136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.308211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.857 [2024-12-14 01:20:44.308221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:10.857 [2024-12-14 01:20:44.308238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:10.857 [2024-12-14 01:20:44.308255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.857 [2024-12-14 01:20:44.309308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.595 ms, result 0 00:21:12.243  [2024-12-14T01:20:46.799Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-14T01:20:47.743Z] Copying: 27/1024 [MB] (14 MBps) [2024-12-14T01:20:48.687Z] Copying: 38/1024 [MB] (11 MBps) [2024-12-14T01:20:49.631Z] Copying: 55/1024 [MB] (17 MBps) [2024-12-14T01:20:50.575Z] Copying: 70/1024 [MB] (14 MBps) [2024-12-14T01:20:51.520Z] Copying: 88/1024 [MB] (18 MBps) [2024-12-14T01:20:52.906Z] Copying: 106/1024 [MB] (17 MBps) [2024-12-14T01:20:53.861Z] Copying: 123/1024 [MB] (17 MBps) [2024-12-14T01:20:54.850Z] Copying: 139/1024 [MB] (15 MBps) [2024-12-14T01:20:55.792Z] Copying: 164/1024 [MB] (25 MBps) [2024-12-14T01:20:56.737Z] Copying: 186/1024 [MB] (21 MBps) 
[2024-12-14T01:20:57.683Z] Copying: 205/1024 [MB] (19 MBps) [2024-12-14T01:20:58.628Z] Copying: 222/1024 [MB] (16 MBps) [2024-12-14T01:20:59.571Z] Copying: 245/1024 [MB] (23 MBps) [2024-12-14T01:21:00.514Z] Copying: 258/1024 [MB] (13 MBps) [2024-12-14T01:21:01.902Z] Copying: 277/1024 [MB] (18 MBps) [2024-12-14T01:21:02.847Z] Copying: 299/1024 [MB] (22 MBps) [2024-12-14T01:21:03.794Z] Copying: 317/1024 [MB] (17 MBps) [2024-12-14T01:21:04.739Z] Copying: 338/1024 [MB] (21 MBps) [2024-12-14T01:21:05.683Z] Copying: 353/1024 [MB] (14 MBps) [2024-12-14T01:21:06.627Z] Copying: 373/1024 [MB] (20 MBps) [2024-12-14T01:21:07.574Z] Copying: 387/1024 [MB] (14 MBps) [2024-12-14T01:21:08.518Z] Copying: 402/1024 [MB] (14 MBps) [2024-12-14T01:21:09.906Z] Copying: 421/1024 [MB] (18 MBps) [2024-12-14T01:21:10.850Z] Copying: 440/1024 [MB] (19 MBps) [2024-12-14T01:21:11.792Z] Copying: 457/1024 [MB] (17 MBps) [2024-12-14T01:21:12.735Z] Copying: 472/1024 [MB] (14 MBps) [2024-12-14T01:21:13.680Z] Copying: 488/1024 [MB] (16 MBps) [2024-12-14T01:21:14.625Z] Copying: 500/1024 [MB] (12 MBps) [2024-12-14T01:21:15.568Z] Copying: 515/1024 [MB] (15 MBps) [2024-12-14T01:21:16.511Z] Copying: 526/1024 [MB] (10 MBps) [2024-12-14T01:21:17.916Z] Copying: 536/1024 [MB] (10 MBps) [2024-12-14T01:21:18.858Z] Copying: 547/1024 [MB] (11 MBps) [2024-12-14T01:21:19.802Z] Copying: 561/1024 [MB] (13 MBps) [2024-12-14T01:21:20.795Z] Copying: 576/1024 [MB] (14 MBps) [2024-12-14T01:21:21.740Z] Copying: 586/1024 [MB] (10 MBps) [2024-12-14T01:21:22.685Z] Copying: 597/1024 [MB] (11 MBps) [2024-12-14T01:21:23.630Z] Copying: 608/1024 [MB] (10 MBps) [2024-12-14T01:21:24.575Z] Copying: 620/1024 [MB] (11 MBps) [2024-12-14T01:21:25.520Z] Copying: 649/1024 [MB] (28 MBps) [2024-12-14T01:21:26.912Z] Copying: 662/1024 [MB] (13 MBps) [2024-12-14T01:21:27.856Z] Copying: 678/1024 [MB] (16 MBps) [2024-12-14T01:21:28.802Z] Copying: 696/1024 [MB] (17 MBps) [2024-12-14T01:21:29.747Z] Copying: 706/1024 [MB] (10 MBps) 
[2024-12-14T01:21:30.693Z] Copying: 717/1024 [MB] (10 MBps) [2024-12-14T01:21:31.640Z] Copying: 727/1024 [MB] (10 MBps) [2024-12-14T01:21:32.585Z] Copying: 738/1024 [MB] (10 MBps) [2024-12-14T01:21:33.529Z] Copying: 749/1024 [MB] (10 MBps) [2024-12-14T01:21:34.916Z] Copying: 767/1024 [MB] (18 MBps) [2024-12-14T01:21:35.858Z] Copying: 790/1024 [MB] (22 MBps) [2024-12-14T01:21:36.802Z] Copying: 812/1024 [MB] (22 MBps) [2024-12-14T01:21:37.747Z] Copying: 828/1024 [MB] (16 MBps) [2024-12-14T01:21:38.691Z] Copying: 850/1024 [MB] (21 MBps) [2024-12-14T01:21:39.636Z] Copying: 871/1024 [MB] (20 MBps) [2024-12-14T01:21:40.582Z] Copying: 891/1024 [MB] (20 MBps) [2024-12-14T01:21:41.527Z] Copying: 915/1024 [MB] (23 MBps) [2024-12-14T01:21:42.915Z] Copying: 933/1024 [MB] (17 MBps) [2024-12-14T01:21:43.860Z] Copying: 953/1024 [MB] (19 MBps) [2024-12-14T01:21:44.805Z] Copying: 968/1024 [MB] (15 MBps) [2024-12-14T01:21:45.750Z] Copying: 983/1024 [MB] (14 MBps) [2024-12-14T01:21:46.730Z] Copying: 1002/1024 [MB] (19 MBps) [2024-12-14T01:21:47.300Z] Copying: 1013/1024 [MB] (10 MBps) [2024-12-14T01:21:47.561Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-14 01:21:47.505674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.505768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:13.949 [2024-12-14 01:21:47.505785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:13.949 [2024-12-14 01:21:47.505802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.505829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:13.949 [2024-12-14 01:21:47.506617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.506676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:13.949 [2024-12-14 
01:21:47.506699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:22:13.949 [2024-12-14 01:21:47.506709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.506954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.506967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:13.949 [2024-12-14 01:21:47.506977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:22:13.949 [2024-12-14 01:21:47.506990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.510554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.510600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:13.949 [2024-12-14 01:21:47.510611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:22:13.949 [2024-12-14 01:21:47.510628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.517662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.517712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:13.949 [2024-12-14 01:21:47.517723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.008 ms 00:22:13.949 [2024-12-14 01:21:47.517732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.520872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.520936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:13.949 [2024-12-14 01:21:47.520949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.051 ms 00:22:13.949 [2024-12-14 01:21:47.520957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:13.949 [2024-12-14 01:21:47.526984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.527074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:13.949 [2024-12-14 01:21:47.527097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.997 ms 00:22:13.949 [2024-12-14 01:21:47.527112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.949 [2024-12-14 01:21:47.527345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.949 [2024-12-14 01:21:47.527380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:13.949 [2024-12-14 01:21:47.527401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:22:13.950 [2024-12-14 01:21:47.527424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.950 [2024-12-14 01:21:47.531128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.950 [2024-12-14 01:21:47.531201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:13.950 [2024-12-14 01:21:47.531221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:22:13.950 [2024-12-14 01:21:47.531238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.950 [2024-12-14 01:21:47.534225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.950 [2024-12-14 01:21:47.534281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:13.950 [2024-12-14 01:21:47.534292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:22:13.950 [2024-12-14 01:21:47.534299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.950 [2024-12-14 01:21:47.536273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.950 [2024-12-14 01:21:47.536324] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:13.950 [2024-12-14 01:21:47.536334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:22:13.950 [2024-12-14 01:21:47.536341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.950 [2024-12-14 01:21:47.538670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.950 [2024-12-14 01:21:47.538719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:13.950 [2024-12-14 01:21:47.538728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:22:13.950 [2024-12-14 01:21:47.538735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.950 [2024-12-14 01:21:47.538756] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:13.950 [2024-12-14 01:21:47.538773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 
01:21:47.538839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 
[2024-12-14 01:21:47.538949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.538993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:22:13.950 [2024-12-14 01:21:47.539053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: 
free 00:22:13.950 [2024-12-14 01:21:47.539175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:22:13.950 [2024-12-14 01:21:47.539286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:13.950 [2024-12-14 01:21:47.539374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 
0 state: free 00:22:13.951 [2024-12-14 01:21:47.539397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:13.951 [2024-12-14 01:21:47.539578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:13.951 [2024-12-14 01:21:47.539586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e33b643-f118-434f-b82d-1788ad8d0b55 00:22:13.951 [2024-12-14 01:21:47.539595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:13.951 [2024-12-14 01:21:47.539603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:13.951 [2024-12-14 01:21:47.539611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:13.951 [2024-12-14 01:21:47.539634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:13.951 [2024-12-14 01:21:47.539643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:22:13.951 [2024-12-14 01:21:47.539651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:13.951 [2024-12-14 01:21:47.539660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:13.951 [2024-12-14 01:21:47.539666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:13.951 [2024-12-14 01:21:47.539673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:13.951 [2024-12-14 01:21:47.539682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.951 [2024-12-14 01:21:47.539709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:13.951 [2024-12-14 01:21:47.539719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:22:13.951 [2024-12-14 01:21:47.539727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.542009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.951 [2024-12-14 01:21:47.542049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:13.951 [2024-12-14 01:21:47.542060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:22:13.951 [2024-12-14 01:21:47.542068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.542203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.951 [2024-12-14 01:21:47.542213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:13.951 [2024-12-14 01:21:47.542222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:13.951 [2024-12-14 01:21:47.542230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.549545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.951 [2024-12-14 01:21:47.549594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:22:13.951 [2024-12-14 01:21:47.549605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.951 [2024-12-14 01:21:47.549613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.549730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.951 [2024-12-14 01:21:47.549740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.951 [2024-12-14 01:21:47.549748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.951 [2024-12-14 01:21:47.549756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.549821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.951 [2024-12-14 01:21:47.549832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.951 [2024-12-14 01:21:47.549840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.951 [2024-12-14 01:21:47.549853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.951 [2024-12-14 01:21:47.549873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.951 [2024-12-14 01:21:47.549882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.951 [2024-12-14 01:21:47.549891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.951 [2024-12-14 01:21:47.549898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.563713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.563768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.211 [2024-12-14 01:21:47.563780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.563788] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.211 [2024-12-14 01:21:47.575151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.211 [2024-12-14 01:21:47.575233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.211 [2024-12-14 01:21:47.575306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.211 [2024-12-14 01:21:47.575405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 
[2024-12-14 01:21:47.575461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:14.211 [2024-12-14 01:21:47.575473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.211 [2024-12-14 01:21:47.575545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.211 [2024-12-14 01:21:47.575643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.211 [2024-12-14 01:21:47.575655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.211 [2024-12-14 01:21:47.575670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-12-14 01:21:47.575813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.134 ms, result 0 00:22:14.211 00:22:14.211 00:22:14.211 01:21:47 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:16.751 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:16.751 01:21:49 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:16.751 [2024-12-14 01:21:49.881030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:16.751 [2024-12-14 01:21:49.881287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91164 ] 00:22:16.751 [2024-12-14 01:21:50.018011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.751 [2024-12-14 01:21:50.038659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.751 [2024-12-14 01:21:50.150415] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.751 [2024-12-14 01:21:50.150510] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.751 [2024-12-14 01:21:50.310090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.310154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.751 [2024-12-14 01:21:50.310169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:16.751 [2024-12-14 01:21:50.310178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.310239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.310250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.751 [2024-12-14 01:21:50.310259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:16.751 [2024-12-14 01:21:50.310272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.310299] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.751 [2024-12-14 01:21:50.310962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.751 [2024-12-14 
01:21:50.311012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.311030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.751 [2024-12-14 01:21:50.311046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:22:16.751 [2024-12-14 01:21:50.311054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.312732] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.751 [2024-12-14 01:21:50.316513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.316567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.751 [2024-12-14 01:21:50.316578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:22:16.751 [2024-12-14 01:21:50.316595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.316685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.316699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.751 [2024-12-14 01:21:50.316709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:16.751 [2024-12-14 01:21:50.316717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.324898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.324943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.751 [2024-12-14 01:21:50.324958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.139 ms 00:22:16.751 [2024-12-14 01:21:50.324967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.325060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.325070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.751 [2024-12-14 01:21:50.325080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:16.751 [2024-12-14 01:21:50.325087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.325153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.325164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.751 [2024-12-14 01:21:50.325173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:16.751 [2024-12-14 01:21:50.325185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.325208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.751 [2024-12-14 01:21:50.327233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.327274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.751 [2024-12-14 01:21:50.327284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.031 ms 00:22:16.751 [2024-12-14 01:21:50.327292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.327329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.327338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.751 [2024-12-14 01:21:50.327346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:16.751 [2024-12-14 01:21:50.327357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.327382] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.751 
[2024-12-14 01:21:50.327407] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:16.751 [2024-12-14 01:21:50.327452] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.751 [2024-12-14 01:21:50.327475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:16.751 [2024-12-14 01:21:50.327581] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:16.751 [2024-12-14 01:21:50.327594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.751 [2024-12-14 01:21:50.327609] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:16.751 [2024-12-14 01:21:50.327637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.751 [2024-12-14 01:21:50.327651] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.751 [2024-12-14 01:21:50.327664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:16.751 [2024-12-14 01:21:50.327672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.751 [2024-12-14 01:21:50.327686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:16.751 [2024-12-14 01:21:50.327694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:16.751 [2024-12-14 01:21:50.327702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.327709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.751 [2024-12-14 01:21:50.327720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.326 ms 00:22:16.751 [2024-12-14 01:21:50.327731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.327817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.751 [2024-12-14 01:21:50.327834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.751 [2024-12-14 01:21:50.327842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:16.751 [2024-12-14 01:21:50.327849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.751 [2024-12-14 01:21:50.327947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.751 [2024-12-14 01:21:50.327958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.751 [2024-12-14 01:21:50.327968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.751 [2024-12-14 01:21:50.327977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.751 [2024-12-14 01:21:50.327986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.751 [2024-12-14 01:21:50.327994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:16.751 [2024-12-14 01:21:50.328010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.751 [2024-12-14 01:21:50.328018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.751 [2024-12-14 01:21:50.328033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.751 [2024-12-14 01:21:50.328044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:16.751 [2024-12-14 01:21:50.328052] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.751 [2024-12-14 01:21:50.328059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.751 [2024-12-14 01:21:50.328067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:16.751 [2024-12-14 01:21:50.328076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.751 [2024-12-14 01:21:50.328094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:16.751 [2024-12-14 01:21:50.328102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.751 [2024-12-14 01:21:50.328118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.751 [2024-12-14 01:21:50.328134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.751 [2024-12-14 01:21:50.328142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.751 [2024-12-14 01:21:50.328158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.751 [2024-12-14 01:21:50.328165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.751 [2024-12-14 01:21:50.328186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.751 [2024-12-14 01:21:50.328194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:16.751 [2024-12-14 01:21:50.328201] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.752 [2024-12-14 01:21:50.328209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.752 [2024-12-14 01:21:50.328217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:16.752 [2024-12-14 01:21:50.328225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.752 [2024-12-14 01:21:50.328233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.752 [2024-12-14 01:21:50.328241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:16.752 [2024-12-14 01:21:50.328249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.752 [2024-12-14 01:21:50.328256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:16.752 [2024-12-14 01:21:50.328264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:16.752 [2024-12-14 01:21:50.328272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.752 [2024-12-14 01:21:50.328280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:16.752 [2024-12-14 01:21:50.328288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:16.752 [2024-12-14 01:21:50.328296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.752 [2024-12-14 01:21:50.328306] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.752 [2024-12-14 01:21:50.328326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.752 [2024-12-14 01:21:50.328335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.752 [2024-12-14 01:21:50.328343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.752 [2024-12-14 01:21:50.328356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.752 
[2024-12-14 01:21:50.328366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.752 [2024-12-14 01:21:50.328374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.752 [2024-12-14 01:21:50.328380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.752 [2024-12-14 01:21:50.328387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.752 [2024-12-14 01:21:50.328394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.752 [2024-12-14 01:21:50.328404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.752 [2024-12-14 01:21:50.328414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:16.752 [2024-12-14 01:21:50.328429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:16.752 [2024-12-14 01:21:50.328436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:16.752 [2024-12-14 01:21:50.328443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:16.752 [2024-12-14 01:21:50.328452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:16.752 [2024-12-14 01:21:50.328459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:16.752 [2024-12-14 01:21:50.328466] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:16.752 [2024-12-14 01:21:50.328473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:16.752 [2024-12-14 01:21:50.328480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:16.752 [2024-12-14 01:21:50.328492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:16.752 [2024-12-14 01:21:50.328527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.752 [2024-12-14 01:21:50.328535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.752 [2024-12-14 01:21:50.328550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:22:16.752 [2024-12-14 01:21:50.328557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.752 [2024-12-14 01:21:50.328565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.752 [2024-12-14 01:21:50.328574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.752 [2024-12-14 01:21:50.328583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.752 [2024-12-14 01:21:50.328591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:22:16.752 [2024-12-14 01:21:50.328605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.752 [2024-12-14 01:21:50.342533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.752 [2024-12-14 01:21:50.342591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.752 [2024-12-14 01:21:50.342601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.861 ms 00:22:16.752 [2024-12-14 01:21:50.342610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.752 [2024-12-14 01:21:50.342718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.752 [2024-12-14 01:21:50.342727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.752 [2024-12-14 01:21:50.342736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:16.752 [2024-12-14 01:21:50.342749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.012 [2024-12-14 01:21:50.364945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.365013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.013 [2024-12-14 
01:21:50.365026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.138 ms 00:22:17.013 [2024-12-14 01:21:50.365034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.365083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.365094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.013 [2024-12-14 01:21:50.365102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:17.013 [2024-12-14 01:21:50.365111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.365785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.365838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.013 [2024-12-14 01:21:50.365852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:22:17.013 [2024-12-14 01:21:50.365863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.366041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.366059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.013 [2024-12-14 01:21:50.366070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:22:17.013 [2024-12-14 01:21:50.366079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.374267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.374318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.013 [2024-12-14 01:21:50.374338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.164 ms 00:22:17.013 [2024-12-14 01:21:50.374347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:17.013 [2024-12-14 01:21:50.378269] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:17.013 [2024-12-14 01:21:50.378325] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:17.013 [2024-12-14 01:21:50.378341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.378350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:17.013 [2024-12-14 01:21:50.378359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:22:17.013 [2024-12-14 01:21:50.378366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.394213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.394264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:17.013 [2024-12-14 01:21:50.394281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.787 ms 00:22:17.013 [2024-12-14 01:21:50.394290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.397363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.397415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:17.013 [2024-12-14 01:21:50.397428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:22:17.013 [2024-12-14 01:21:50.397436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.399884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.399931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:17.013 [2024-12-14 01:21:50.399942] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:22:17.013 [2024-12-14 01:21:50.399949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.400294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.400306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:17.013 [2024-12-14 01:21:50.400316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:17.013 [2024-12-14 01:21:50.400330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.425450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.425543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:17.013 [2024-12-14 01:21:50.425556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.095 ms 00:22:17.013 [2024-12-14 01:21:50.425566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.433658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:17.013 [2024-12-14 01:21:50.436775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.436819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:17.013 [2024-12-14 01:21:50.436831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.156 ms 00:22:17.013 [2024-12-14 01:21:50.436846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.436924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.436937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:17.013 [2024-12-14 01:21:50.436958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:17.013 
[2024-12-14 01:21:50.436967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.437038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.437052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:17.013 [2024-12-14 01:21:50.437062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:17.013 [2024-12-14 01:21:50.437070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.437090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.437099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:17.013 [2024-12-14 01:21:50.437108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:17.013 [2024-12-14 01:21:50.437116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.437161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:17.013 [2024-12-14 01:21:50.437172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.437182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:17.013 [2024-12-14 01:21:50.437192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:17.013 [2024-12-14 01:21:50.437201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.442896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.442948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:17.013 [2024-12-14 01:21:50.442968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.676 ms 00:22:17.013 [2024-12-14 01:21:50.442977] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.443060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.013 [2024-12-14 01:21:50.443071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:17.013 [2024-12-14 01:21:50.443080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:17.013 [2024-12-14 01:21:50.443094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.013 [2024-12-14 01:21:50.444707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.121 ms, result 0 00:22:17.954  [2024-12-14T01:21:52.505Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-14T01:21:53.888Z] Copying: 29/1024 [MB] (11 MBps) [2024-12-14T01:21:54.457Z] Copying: 42/1024 [MB] (12 MBps) [2024-12-14T01:21:55.839Z] Copying: 55/1024 [MB] (12 MBps) [2024-12-14T01:21:56.779Z] Copying: 69/1024 [MB] (14 MBps) [2024-12-14T01:21:57.720Z] Copying: 88/1024 [MB] (18 MBps) [2024-12-14T01:21:58.661Z] Copying: 104/1024 [MB] (16 MBps) [2024-12-14T01:21:59.602Z] Copying: 115/1024 [MB] (11 MBps) [2024-12-14T01:22:00.542Z] Copying: 126/1024 [MB] (10 MBps) [2024-12-14T01:22:01.482Z] Copying: 136/1024 [MB] (10 MBps) [2024-12-14T01:22:02.867Z] Copying: 146/1024 [MB] (10 MBps) [2024-12-14T01:22:03.810Z] Copying: 157/1024 [MB] (10 MBps) [2024-12-14T01:22:04.753Z] Copying: 167/1024 [MB] (10 MBps) [2024-12-14T01:22:05.696Z] Copying: 177/1024 [MB] (10 MBps) [2024-12-14T01:22:06.640Z] Copying: 187/1024 [MB] (10 MBps) [2024-12-14T01:22:07.583Z] Copying: 197/1024 [MB] (10 MBps) [2024-12-14T01:22:08.523Z] Copying: 220/1024 [MB] (22 MBps) [2024-12-14T01:22:09.465Z] Copying: 258/1024 [MB] (38 MBps) [2024-12-14T01:22:10.854Z] Copying: 270/1024 [MB] (12 MBps) [2024-12-14T01:22:11.798Z] Copying: 281/1024 [MB] (10 MBps) [2024-12-14T01:22:12.783Z] Copying: 291/1024 [MB] (10 MBps) [2024-12-14T01:22:13.726Z] Copying: 301/1024 [MB] 
[2024-12-14T01:22:47.751Z] Copying: 960/1024 [MB] (15 MBps) [2024-12-14T01:22:48.695Z] Copying: 971/1024 [MB] (11 MBps) [2024-12-14T01:22:49.640Z] Copying: 985/1024 [MB] (13 MBps) [2024-12-14T01:22:50.584Z] Copying: 996/1024 [MB] (10 MBps) [2024-12-14T01:22:51.526Z] Copying: 1006/1024 [MB] (10 MBps) [2024-12-14T01:22:52.469Z] Copying: 1017/1024 [MB] (10 MBps) [2024-12-14T01:22:53.041Z] Copying: 1048036/1048576 [kB] (6520 kBps) [2024-12-14T01:22:53.041Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-14 01:22:52.978589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:52.978682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:19.429 [2024-12-14 01:22:52.978699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:19.429 [2024-12-14 01:22:52.978709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.429 [2024-12-14 01:22:52.982167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.429 [2024-12-14 01:22:52.984310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:52.984390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:19.429 [2024-12-14 01:22:52.984409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:23:19.429 [2024-12-14 01:22:52.984420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.429 [2024-12-14 01:22:52.998456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:52.998515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:19.429 [2024-12-14 01:22:52.998530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.881 ms 00:23:19.429 [2024-12-14 01:22:52.998540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:19.429 [2024-12-14 01:22:53.023492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:53.023549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:19.429 [2024-12-14 01:22:53.023561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.931 ms 00:23:19.429 [2024-12-14 01:22:53.023570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.429 [2024-12-14 01:22:53.029711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:53.029780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:19.429 [2024-12-14 01:22:53.029792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:23:19.429 [2024-12-14 01:22:53.029802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.429 [2024-12-14 01:22:53.032725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:53.032776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:19.429 [2024-12-14 01:22:53.032787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:23:19.429 [2024-12-14 01:22:53.032796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.429 [2024-12-14 01:22:53.037722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.429 [2024-12-14 01:22:53.037778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:19.429 [2024-12-14 01:22:53.037790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.879 ms 00:23:19.429 [2024-12-14 01:22:53.037809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.328809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.003 [2024-12-14 01:22:53.328881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:23:20.003 [2024-12-14 01:22:53.328895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 290.947 ms 00:23:20.003 [2024-12-14 01:22:53.328905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.332380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.003 [2024-12-14 01:22:53.332434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:20.003 [2024-12-14 01:22:53.332444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.457 ms 00:23:20.003 [2024-12-14 01:22:53.332451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.335268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.003 [2024-12-14 01:22:53.335318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:20.003 [2024-12-14 01:22:53.335328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:23:20.003 [2024-12-14 01:22:53.335336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.337477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.003 [2024-12-14 01:22:53.337537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:20.003 [2024-12-14 01:22:53.337548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:23:20.003 [2024-12-14 01:22:53.337555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.339990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.003 [2024-12-14 01:22:53.340036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:20.003 [2024-12-14 01:22:53.340045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:23:20.003 [2024-12-14 
01:22:53.340054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.003 [2024-12-14 01:22:53.340094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:20.003 [2024-12-14 01:22:53.340121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:23:20.003 [2024-12-14 01:22:53.340132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:20.003 [2024-12-14 01:22:53.340279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340344] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340450] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340575] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 
01:22:53.340707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 
[2024-12-14 01:22:53.340821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:23:20.004 [2024-12-14 01:22:53.340932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:20.004 [2024-12-14 01:22:53.340973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:20.004 [2024-12-14 01:22:53.340981] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e33b643-f118-434f-b82d-1788ad8d0b55 00:23:20.004 [2024-12-14 01:22:53.340990] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:23:20.004 [2024-12-14 01:22:53.341001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:23:20.004 [2024-12-14 01:22:53.341011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:23:20.004 [2024-12-14 01:22:53.341019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:23:20.004 [2024-12-14 01:22:53.341026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:20.004 [2024-12-14 01:22:53.341034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:20.004 [2024-12-14 01:22:53.341046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:20.004 [2024-12-14 01:22:53.341053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:20.004 [2024-12-14 01:22:53.341060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:20.004 [2024-12-14 01:22:53.341067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.004 [2024-12-14 01:22:53.341075] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:20.005 [2024-12-14 01:22:53.341084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:23:20.005 [2024-12-14 01:22:53.341092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.343462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.005 [2024-12-14 01:22:53.343504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:20.005 [2024-12-14 01:22:53.343515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.351 ms 00:23:20.005 [2024-12-14 01:22:53.343523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.343670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.005 [2024-12-14 01:22:53.343682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:20.005 [2024-12-14 01:22:53.343692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:20.005 [2024-12-14 01:22:53.343706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.351255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.351310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.005 [2024-12-14 01:22:53.351322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.351330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.351389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.351399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.005 [2024-12-14 01:22:53.351407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:23:20.005 [2024-12-14 01:22:53.351420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.351468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.351479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.005 [2024-12-14 01:22:53.351488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.351496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.351512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.351520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.005 [2024-12-14 01:22:53.351528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.351536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.365251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.365312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.005 [2024-12-14 01:22:53.365323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.365332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.375808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.375860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.005 [2024-12-14 01:22:53.375871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.375881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.375938] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.375948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.005 [2024-12-14 01:22:53.375957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.375965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.375999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.376011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.005 [2024-12-14 01:22:53.376020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.376029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.376103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.376117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.005 [2024-12-14 01:22:53.376126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.376133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.376162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.376172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:20.005 [2024-12-14 01:22:53.376180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.376188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.376227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.376239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.005 [2024-12-14 
01:22:53.376247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.376256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.376307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.005 [2024-12-14 01:22:53.376318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.005 [2024-12-14 01:22:53.376326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.005 [2024-12-14 01:22:53.376341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.005 [2024-12-14 01:22:53.376472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.390 ms, result 0 00:23:20.948 00:23:20.948 00:23:20.948 01:22:54 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:20.948 [2024-12-14 01:22:54.367674] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:20.948 [2024-12-14 01:22:54.367814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91830 ] 00:23:20.948 [2024-12-14 01:22:54.515687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.949 [2024-12-14 01:22:54.545147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.210 [2024-12-14 01:22:54.664143] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.210 [2024-12-14 01:22:54.664233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.473 [2024-12-14 01:22:54.825414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.825695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.473 [2024-12-14 01:22:54.825730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.473 [2024-12-14 01:22:54.825740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.825816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.825828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.473 [2024-12-14 01:22:54.825840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:21.473 [2024-12-14 01:22:54.825854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.825886] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.473 [2024-12-14 01:22:54.826157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.473 [2024-12-14 
01:22:54.826175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.826188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.473 [2024-12-14 01:22:54.826201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:23:21.473 [2024-12-14 01:22:54.826210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.827949] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:21.473 [2024-12-14 01:22:54.831787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.831844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.473 [2024-12-14 01:22:54.831856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:23:21.473 [2024-12-14 01:22:54.831870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.831946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.831956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.473 [2024-12-14 01:22:54.831965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:21.473 [2024-12-14 01:22:54.831973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.840344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.840390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.473 [2024-12-14 01:22:54.840411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.328 ms 00:23:21.473 [2024-12-14 01:22:54.840419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.840517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.840527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.473 [2024-12-14 01:22:54.840536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:21.473 [2024-12-14 01:22:54.840544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.840610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.840649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.473 [2024-12-14 01:22:54.840659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:21.473 [2024-12-14 01:22:54.840669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.840698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.473 [2024-12-14 01:22:54.842850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.842892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.473 [2024-12-14 01:22:54.842906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:23:21.473 [2024-12-14 01:22:54.842913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.473 [2024-12-14 01:22:54.842951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.473 [2024-12-14 01:22:54.842961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.473 [2024-12-14 01:22:54.842969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:21.474 [2024-12-14 01:22:54.842979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.474 [2024-12-14 01:22:54.843003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.474 
[2024-12-14 01:22:54.843025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:21.474 [2024-12-14 01:22:54.843068] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.474 [2024-12-14 01:22:54.843098] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:21.474 [2024-12-14 01:22:54.843204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.474 [2024-12-14 01:22:54.843215] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.474 [2024-12-14 01:22:54.843230] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:21.474 [2024-12-14 01:22:54.843240] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843253] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.474 [2024-12-14 01:22:54.843270] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.474 [2024-12-14 01:22:54.843278] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.474 [2024-12-14 01:22:54.843285] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.474 [2024-12-14 01:22:54.843293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.474 [2024-12-14 01:22:54.843302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.474 [2024-12-14 01:22:54.843310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.294 ms 00:23:21.474 [2024-12-14 01:22:54.843317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.474 [2024-12-14 01:22:54.843407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.474 [2024-12-14 01:22:54.843415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.474 [2024-12-14 01:22:54.843423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:21.474 [2024-12-14 01:22:54.843431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.474 [2024-12-14 01:22:54.843531] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.474 [2024-12-14 01:22:54.843543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.474 [2024-12-14 01:22:54.843552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.474 [2024-12-14 01:22:54.843583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.474 [2024-12-14 01:22:54.843608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.474 [2024-12-14 01:22:54.843642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.474 [2024-12-14 01:22:54.843650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.474 [2024-12-14 01:22:54.843657] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.474 [2024-12-14 01:22:54.843665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.474 [2024-12-14 01:22:54.843674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.474 [2024-12-14 01:22:54.843681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.474 [2024-12-14 01:22:54.843698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.474 [2024-12-14 01:22:54.843724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.474 [2024-12-14 01:22:54.843750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.474 [2024-12-14 01:22:54.843774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.474 [2024-12-14 01:22:54.843798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843806] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.474 [2024-12-14 01:22:54.843822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.474 [2024-12-14 01:22:54.843837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.474 [2024-12-14 01:22:54.843845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.474 [2024-12-14 01:22:54.843852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.474 [2024-12-14 01:22:54.843862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.474 [2024-12-14 01:22:54.843870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.474 [2024-12-14 01:22:54.843876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.474 [2024-12-14 01:22:54.843889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.474 [2024-12-14 01:22:54.843897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843904] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.474 [2024-12-14 01:22:54.843914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.474 [2024-12-14 01:22:54.843922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.474 [2024-12-14 01:22:54.843939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.474 
[2024-12-14 01:22:54.843945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.474 [2024-12-14 01:22:54.843951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.474 [2024-12-14 01:22:54.843958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.474 [2024-12-14 01:22:54.843964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.474 [2024-12-14 01:22:54.843974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.474 [2024-12-14 01:22:54.843985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.474 [2024-12-14 01:22:54.843995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.474 [2024-12-14 01:22:54.844010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.474 [2024-12-14 01:22:54.844018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.474 [2024-12-14 01:22:54.844025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.474 [2024-12-14 01:22:54.844032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:21.474 [2024-12-14 01:22:54.844039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.474 [2024-12-14 01:22:54.844047] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:21.474 [2024-12-14 01:22:54.844054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.474 [2024-12-14 01:22:54.844062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.474 [2024-12-14 01:22:54.844076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.474 [2024-12-14 01:22:54.844117] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.474 [2024-12-14 01:22:54.844126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.474 [2024-12-14 01:22:54.844143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:23:21.474 [2024-12-14 01:22:54.844150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.474 [2024-12-14 01:22:54.844157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.474 [2024-12-14 01:22:54.844165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.474 [2024-12-14 01:22:54.844172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.474 [2024-12-14 01:22:54.844181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:23:21.475 [2024-12-14 01:22:54.844190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.858948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.859132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.475 [2024-12-14 01:22:54.859189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.711 ms 00:23:21.475 [2024-12-14 01:22:54.859212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.859319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.859342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.475 [2024-12-14 01:22:54.859371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:21.475 [2024-12-14 01:22:54.859391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.883326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.883564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.475 [2024-12-14 
01:22:54.883698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:23:21.475 [2024-12-14 01:22:54.883740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.883823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.883865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.475 [2024-12-14 01:22:54.884087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.475 [2024-12-14 01:22:54.884129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.884795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.884978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.475 [2024-12-14 01:22:54.885062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:23:21.475 [2024-12-14 01:22:54.885100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.885406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.885448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.475 [2024-12-14 01:22:54.885582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:23:21.475 [2024-12-14 01:22:54.885619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.893897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.894041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.475 [2024-12-14 01:22:54.894136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.190 ms 00:23:21.475 [2024-12-14 01:22:54.894147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:21.475 [2024-12-14 01:22:54.897942] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:21.475 [2024-12-14 01:22:54.897989] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.475 [2024-12-14 01:22:54.898005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.898014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.475 [2024-12-14 01:22:54.898023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:23:21.475 [2024-12-14 01:22:54.898030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.913414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.913480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.475 [2024-12-14 01:22:54.913515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.335 ms 00:23:21.475 [2024-12-14 01:22:54.913523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.916168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.916212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.475 [2024-12-14 01:22:54.916223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:23:21.475 [2024-12-14 01:22:54.916230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.918476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.918520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.475 [2024-12-14 01:22:54.918529] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:23:21.475 [2024-12-14 01:22:54.918536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.918912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.918926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.475 [2024-12-14 01:22:54.918942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:23:21.475 [2024-12-14 01:22:54.918954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.941979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.942190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.475 [2024-12-14 01:22:54.942220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:23:21.475 [2024-12-14 01:22:54.942233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.950213] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.475 [2024-12-14 01:22:54.953046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.953184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.475 [2024-12-14 01:22:54.953202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.768 ms 00:23:21.475 [2024-12-14 01:22:54.953210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.953296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.953308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.475 [2024-12-14 01:22:54.953326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:21.475 
[2024-12-14 01:22:54.953344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.955111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.955158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.475 [2024-12-14 01:22:54.955168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:23:21.475 [2024-12-14 01:22:54.955176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.955211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.955220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.475 [2024-12-14 01:22:54.955229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.475 [2024-12-14 01:22:54.955236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.955273] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.475 [2024-12-14 01:22:54.955285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.955293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.475 [2024-12-14 01:22:54.955305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:21.475 [2024-12-14 01:22:54.955317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.960691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.960841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.475 [2024-12-14 01:22:54.960898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms 00:23:21.475 [2024-12-14 01:22:54.960922] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.961092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.475 [2024-12-14 01:22:54.961151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.475 [2024-12-14 01:22:54.961178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:21.475 [2024-12-14 01:22:54.961201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.475 [2024-12-14 01:22:54.962426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.547 ms, result 0 00:23:22.862  [2024-12-14T01:22:57.471Z] Copying: 8696/1048576 [kB] (8696 kBps) [2024-12-14T01:22:58.415Z] Copying: 19/1024 [MB] (10 MBps) [2024-12-14T01:22:59.360Z] Copying: 29/1024 [MB] (10 MBps) [2024-12-14T01:23:00.301Z] Copying: 49/1024 [MB] (19 MBps) [2024-12-14T01:23:01.244Z] Copying: 68/1024 [MB] (18 MBps) [2024-12-14T01:23:02.188Z] Copying: 91/1024 [MB] (22 MBps) [2024-12-14T01:23:03.574Z] Copying: 111/1024 [MB] (19 MBps) [2024-12-14T01:23:04.517Z] Copying: 125/1024 [MB] (14 MBps) [2024-12-14T01:23:05.462Z] Copying: 145/1024 [MB] (19 MBps) [2024-12-14T01:23:06.403Z] Copying: 157/1024 [MB] (12 MBps) [2024-12-14T01:23:07.347Z] Copying: 169/1024 [MB] (11 MBps) [2024-12-14T01:23:08.290Z] Copying: 188/1024 [MB] (18 MBps) [2024-12-14T01:23:09.234Z] Copying: 210/1024 [MB] (21 MBps) [2024-12-14T01:23:10.178Z] Copying: 221/1024 [MB] (11 MBps) [2024-12-14T01:23:11.564Z] Copying: 231/1024 [MB] (10 MBps) [2024-12-14T01:23:12.508Z] Copying: 246/1024 [MB] (14 MBps) [2024-12-14T01:23:13.451Z] Copying: 263/1024 [MB] (17 MBps) [2024-12-14T01:23:14.393Z] Copying: 281/1024 [MB] (17 MBps) [2024-12-14T01:23:15.334Z] Copying: 301/1024 [MB] (19 MBps) [2024-12-14T01:23:16.277Z] Copying: 320/1024 [MB] (19 MBps) [2024-12-14T01:23:17.217Z] Copying: 337/1024 [MB] (17 MBps) [2024-12-14T01:23:18.157Z] Copying: 359/1024 
[MB] (22 MBps) [2024-12-14T01:23:19.544Z] Copying: 370/1024 [MB] (10 MBps) [2024-12-14T01:23:20.549Z] Copying: 381/1024 [MB] (10 MBps) [2024-12-14T01:23:21.493Z] Copying: 391/1024 [MB] (10 MBps) [2024-12-14T01:23:22.437Z] Copying: 405/1024 [MB] (13 MBps) [2024-12-14T01:23:23.380Z] Copying: 415/1024 [MB] (10 MBps) [2024-12-14T01:23:24.324Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-14T01:23:25.267Z] Copying: 442/1024 [MB] (16 MBps) [2024-12-14T01:23:26.210Z] Copying: 457/1024 [MB] (14 MBps) [2024-12-14T01:23:27.154Z] Copying: 471/1024 [MB] (13 MBps) [2024-12-14T01:23:28.540Z] Copying: 489/1024 [MB] (18 MBps) [2024-12-14T01:23:29.484Z] Copying: 504/1024 [MB] (14 MBps) [2024-12-14T01:23:30.427Z] Copying: 526/1024 [MB] (22 MBps) [2024-12-14T01:23:31.371Z] Copying: 545/1024 [MB] (18 MBps) [2024-12-14T01:23:32.316Z] Copying: 566/1024 [MB] (20 MBps) [2024-12-14T01:23:33.260Z] Copying: 583/1024 [MB] (17 MBps) [2024-12-14T01:23:34.200Z] Copying: 601/1024 [MB] (17 MBps) [2024-12-14T01:23:35.586Z] Copying: 621/1024 [MB] (20 MBps) [2024-12-14T01:23:36.157Z] Copying: 638/1024 [MB] (16 MBps) [2024-12-14T01:23:37.545Z] Copying: 660/1024 [MB] (21 MBps) [2024-12-14T01:23:38.487Z] Copying: 675/1024 [MB] (14 MBps) [2024-12-14T01:23:39.432Z] Copying: 711/1024 [MB] (35 MBps) [2024-12-14T01:23:40.376Z] Copying: 728/1024 [MB] (17 MBps) [2024-12-14T01:23:41.320Z] Copying: 740/1024 [MB] (11 MBps) [2024-12-14T01:23:42.262Z] Copying: 758/1024 [MB] (17 MBps) [2024-12-14T01:23:43.209Z] Copying: 773/1024 [MB] (14 MBps) [2024-12-14T01:23:44.201Z] Copying: 784/1024 [MB] (11 MBps) [2024-12-14T01:23:45.587Z] Copying: 795/1024 [MB] (10 MBps) [2024-12-14T01:23:46.159Z] Copying: 805/1024 [MB] (10 MBps) [2024-12-14T01:23:47.545Z] Copying: 816/1024 [MB] (10 MBps) [2024-12-14T01:23:48.488Z] Copying: 827/1024 [MB] (10 MBps) [2024-12-14T01:23:49.431Z] Copying: 859/1024 [MB] (31 MBps) [2024-12-14T01:23:50.374Z] Copying: 879/1024 [MB] (20 MBps) [2024-12-14T01:23:51.316Z] Copying: 894/1024 [MB] (15 MBps) 
[2024-12-14T01:23:52.258Z] Copying: 910/1024 [MB] (16 MBps) [2024-12-14T01:23:53.201Z] Copying: 933/1024 [MB] (22 MBps) [2024-12-14T01:23:54.586Z] Copying: 944/1024 [MB] (10 MBps) [2024-12-14T01:23:55.158Z] Copying: 958/1024 [MB] (13 MBps) [2024-12-14T01:23:56.544Z] Copying: 979/1024 [MB] (20 MBps) [2024-12-14T01:23:57.487Z] Copying: 993/1024 [MB] (13 MBps) [2024-12-14T01:23:58.430Z] Copying: 1003/1024 [MB] (10 MBps) [2024-12-14T01:23:59.002Z] Copying: 1014/1024 [MB] (11 MBps) [2024-12-14T01:23:59.575Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-14 01:23:59.283742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.283848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.963 [2024-12-14 01:23:59.283876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:25.963 [2024-12-14 01:23:59.283893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.283936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.963 [2024-12-14 01:23:59.284872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.284908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.963 [2024-12-14 01:23:59.284939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:24:25.963 [2024-12-14 01:23:59.284956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.285403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.285433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.963 [2024-12-14 01:23:59.285459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:24:25.963 [2024-12-14 01:23:59.285495] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.294679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.294939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.963 [2024-12-14 01:23:59.294970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.104 ms 00:24:25.963 [2024-12-14 01:23:59.294983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.302209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.302392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.963 [2024-12-14 01:23:59.302419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.149 ms 00:24:25.963 [2024-12-14 01:23:59.302433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.305190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.305241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.963 [2024-12-14 01:23:59.305253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:24:25.963 [2024-12-14 01:23:59.305261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.309846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.310028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.963 [2024-12-14 01:23:59.310230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.541 ms 00:24:25.963 [2024-12-14 01:23:59.310277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.556820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.557021] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.963 [2024-12-14 01:23:59.557043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 246.488 ms 00:24:25.963 [2024-12-14 01:23:59.557053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.559672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.559719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:25.963 [2024-12-14 01:23:59.559729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:24:25.963 [2024-12-14 01:23:59.559736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.561769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.561816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.963 [2024-12-14 01:23:59.561826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:24:25.963 [2024-12-14 01:23:59.561834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.563516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.563725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.963 [2024-12-14 01:23:59.563746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:24:25.963 [2024-12-14 01:23:59.563754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.565262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.963 [2024-12-14 01:23:59.565305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.963 [2024-12-14 01:23:59.565315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.422 ms 00:24:25.963 [2024-12-14 01:23:59.565324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.963 [2024-12-14 01:23:59.565364] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.963 [2024-12-14 01:23:59.565381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:25.963 [2024-12-14 01:23:59.565394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565495] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.963 [2024-12-14 01:23:59.565563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565650] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 
01:23:59.565767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 
[2024-12-14 01:23:59.565894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.565998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:24:25.964 [2024-12-14 01:23:59.566005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: 
free 00:24:25.964 [2024-12-14 01:23:59.566114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 
state: free 00:24:25.964 [2024-12-14 01:23:59.566223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.964 [2024-12-14 01:23:59.566263] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.964 [2024-12-14 01:23:59.566271] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e33b643-f118-434f-b82d-1788ad8d0b55 00:24:25.964 [2024-12-14 01:23:59.566279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:25.964 [2024-12-14 01:23:59.566291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28864 00:24:25.964 [2024-12-14 01:23:59.566302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27904 00:24:25.964 [2024-12-14 01:23:59.566311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0344 00:24:25.964 [2024-12-14 01:23:59.566319] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.964 [2024-12-14 01:23:59.566328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.964 [2024-12-14 01:23:59.566336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.964 [2024-12-14 01:23:59.566343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.964 [2024-12-14 01:23:59.566350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.964 [2024-12-14 01:23:59.566358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.964 [2024-12-14 
01:23:59.566366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.965 [2024-12-14 01:23:59.566374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:24:25.965 [2024-12-14 01:23:59.566382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.965 [2024-12-14 01:23:59.568881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.965 [2024-12-14 01:23:59.568912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.965 [2024-12-14 01:23:59.568923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:24:25.965 [2024-12-14 01:23:59.568932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.965 [2024-12-14 01:23:59.569051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.965 [2024-12-14 01:23:59.569059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.965 [2024-12-14 01:23:59.569072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:25.965 [2024-12-14 01:23:59.569080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.576721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.576906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.226 [2024-12-14 01:23:59.577095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.577140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.577219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.577341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.226 [2024-12-14 01:23:59.577856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.577971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.578112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.578219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.226 [2024-12-14 01:23:59.578261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.578283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.578382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.578411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.226 [2024-12-14 01:23:59.578431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.578450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.591922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.592124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.226 [2024-12-14 01:23:59.592554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.592581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.602849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.226 [2024-12-14 01:23:59.603061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603166] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.226 [2024-12-14 01:23:59.603187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.226 [2024-12-14 01:23:59.603256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.226 [2024-12-14 01:23:59.603362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.226 [2024-12-14 01:23:59.603423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:24:26.226 [2024-12-14 01:23:59.603494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.226 [2024-12-14 01:23:59.603557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.226 [2024-12-14 01:23:59.603566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.226 [2024-12-14 01:23:59.603575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.226 [2024-12-14 01:23:59.603738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 319.980 ms, result 0 00:24:26.227 00:24:26.227 00:24:26.227 01:23:59 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.775 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.775 Process with pid 89742 is not found 00:24:28.775 Remove shared memory files 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89742 00:24:28.775 01:24:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89742 ']' 00:24:28.775 01:24:02 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89742 00:24:28.775 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89742) - No such process 00:24:28.775 01:24:02 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 89742 is not found' 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:28.775 01:24:02 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:28.775 ************************************ 00:24:28.775 END TEST ftl_restore 00:24:28.775 ************************************ 00:24:28.775 00:24:28.775 real 4m28.547s 00:24:28.775 user 4m16.284s 00:24:28.775 sys 0m11.947s 00:24:28.775 01:24:02 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.775 01:24:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:28.775 01:24:02 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:28.775 01:24:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:28.775 01:24:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.775 01:24:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:28.775 ************************************ 00:24:28.775 START TEST ftl_dirty_shutdown 00:24:28.775 ************************************ 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:28.775 * Looking for test storage... 
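The `killprocess` trace above probes the target PID with `kill -0`, which sends no signal but reports via its exit status whether the process still exists; the "No such process" line is that probe failing after the target already exited. A minimal self-contained sketch of the same idiom (using a short-lived `sleep` child rather than a real SPDK PID):

```shell
# Start a short-lived background process, then probe it with kill -0,
# the same existence check the killprocess helper uses in the log above.
sleep 0.2 &
pid=$!

# While the child is alive, kill -0 succeeds without delivering a signal.
kill -0 "$pid" 2>/dev/null && echo "pid $pid is running"

# After wait reaps the child, the probe fails with "No such process".
wait "$pid"
if ! kill -0 "$pid" 2>/dev/null; then
    echo "Process with pid $pid is not found"
fi
```

Because the shell reaps its own child in `wait`, the second probe is deterministic; this is why the autotest helper can treat a failed `kill -0` as "already gone" rather than an error.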
00:24:28.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.775 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.776 --rc genhtml_branch_coverage=1 00:24:28.776 --rc genhtml_function_coverage=1 00:24:28.776 --rc genhtml_legend=1 00:24:28.776 --rc geninfo_all_blocks=1 00:24:28.776 --rc geninfo_unexecuted_blocks=1 00:24:28.776 00:24:28.776 ' 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.776 --rc genhtml_branch_coverage=1 00:24:28.776 --rc genhtml_function_coverage=1 
00:24:28.776 --rc genhtml_legend=1 00:24:28.776 --rc geninfo_all_blocks=1 00:24:28.776 --rc geninfo_unexecuted_blocks=1 00:24:28.776 00:24:28.776 ' 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.776 --rc genhtml_branch_coverage=1 00:24:28.776 --rc genhtml_function_coverage=1 00:24:28.776 --rc genhtml_legend=1 00:24:28.776 --rc geninfo_all_blocks=1 00:24:28.776 --rc geninfo_unexecuted_blocks=1 00:24:28.776 00:24:28.776 ' 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.776 --rc genhtml_branch_coverage=1 00:24:28.776 --rc genhtml_function_coverage=1 00:24:28.776 --rc genhtml_legend=1 00:24:28.776 --rc geninfo_all_blocks=1 00:24:28.776 --rc geninfo_unexecuted_blocks=1 00:24:28.776 00:24:28.776 ' 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:28.776 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:29.037 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:29.038 01:24:02 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92583 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92583 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92583 ']' 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.038 01:24:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:29.038 [2024-12-14 01:24:02.483129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
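The `waitforlisten 92583` step above blocks until the freshly launched `spdk_tgt` is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that wait-loop pattern, polling for a UNIX domain socket path with a retry cap (the socket path and retry count here are illustrative, not taken from the autotest helpers):

```shell
# Poll until a UNIX domain socket appears, or give up after max_retries.
# /tmp/demo.sock is a hypothetical path for illustration.
sock=/tmp/demo.sock
max_retries=100
i=0
while [ ! -S "$sock" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1
    i=$((i + 1))
done
if [ -S "$sock" ]; then
    echo "listening on $sock"
else
    echo "timed out waiting for $sock"
fi
```

Polling for the socket file (rather than sleeping a fixed interval) lets the test proceed as soon as the RPC server is ready while still bounding the total wait, which is why the log can print the "Waiting for process to start up and listen..." line and then continue almost immediately.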
00:24:29.038 [2024-12-14 01:24:02.483529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92583 ] 00:24:29.038 [2024-12-14 01:24:02.632005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.299 [2024-12-14 01:24:02.662811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:29.873 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1385 -- # local nb 00:24:30.134 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:30.396 { 00:24:30.396 "name": "nvme0n1", 00:24:30.396 "aliases": [ 00:24:30.396 "4ca0292e-88ba-4bbc-9f0a-44d366b323cd" 00:24:30.396 ], 00:24:30.396 "product_name": "NVMe disk", 00:24:30.396 "block_size": 4096, 00:24:30.396 "num_blocks": 1310720, 00:24:30.396 "uuid": "4ca0292e-88ba-4bbc-9f0a-44d366b323cd", 00:24:30.396 "numa_id": -1, 00:24:30.396 "assigned_rate_limits": { 00:24:30.396 "rw_ios_per_sec": 0, 00:24:30.396 "rw_mbytes_per_sec": 0, 00:24:30.396 "r_mbytes_per_sec": 0, 00:24:30.396 "w_mbytes_per_sec": 0 00:24:30.396 }, 00:24:30.396 "claimed": true, 00:24:30.396 "claim_type": "read_many_write_one", 00:24:30.396 "zoned": false, 00:24:30.396 "supported_io_types": { 00:24:30.396 "read": true, 00:24:30.396 "write": true, 00:24:30.396 "unmap": true, 00:24:30.396 "flush": true, 00:24:30.396 "reset": true, 00:24:30.396 "nvme_admin": true, 00:24:30.396 "nvme_io": true, 00:24:30.396 "nvme_io_md": false, 00:24:30.396 "write_zeroes": true, 00:24:30.396 "zcopy": false, 00:24:30.396 "get_zone_info": false, 00:24:30.396 "zone_management": false, 00:24:30.396 "zone_append": false, 00:24:30.396 "compare": true, 00:24:30.396 "compare_and_write": false, 00:24:30.396 "abort": true, 00:24:30.396 "seek_hole": false, 00:24:30.396 "seek_data": false, 00:24:30.396 "copy": true, 00:24:30.396 "nvme_iov_md": false 00:24:30.396 }, 00:24:30.396 "driver_specific": { 00:24:30.396 "nvme": [ 00:24:30.396 { 00:24:30.396 "pci_address": "0000:00:11.0", 00:24:30.396 "trid": { 00:24:30.396 "trtype": "PCIe", 00:24:30.396 "traddr": "0000:00:11.0" 00:24:30.396 }, 00:24:30.396 "ctrlr_data": { 00:24:30.396 "cntlid": 0, 00:24:30.396 "vendor_id": "0x1b36", 00:24:30.396 "model_number": "QEMU NVMe Ctrl", 
00:24:30.396 "serial_number": "12341", 00:24:30.396 "firmware_revision": "8.0.0", 00:24:30.396 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:30.396 "oacs": { 00:24:30.396 "security": 0, 00:24:30.396 "format": 1, 00:24:30.396 "firmware": 0, 00:24:30.396 "ns_manage": 1 00:24:30.396 }, 00:24:30.396 "multi_ctrlr": false, 00:24:30.396 "ana_reporting": false 00:24:30.396 }, 00:24:30.396 "vs": { 00:24:30.396 "nvme_version": "1.4" 00:24:30.396 }, 00:24:30.396 "ns_data": { 00:24:30.396 "id": 1, 00:24:30.396 "can_share": false 00:24:30.396 } 00:24:30.396 } 00:24:30.396 ], 00:24:30.396 "mp_policy": "active_passive" 00:24:30.396 } 00:24:30.396 } 00:24:30.396 ]' 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:30.396 01:24:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:30.658 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8c4a9c13-655c-4aa2-b846-c38fb1f62718 00:24:30.658 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:30.658 01:24:04 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c4a9c13-655c-4aa2-b846-c38fb1f62718 00:24:30.919 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:31.181 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=60166dcf-c8f1-4d4a-88af-d4b046f75c24 00:24:31.181 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 60166dcf-c8f1-4d4a-88af-d4b046f75c24 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:31.442 01:24:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:31.702 { 00:24:31.702 "name": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:31.702 "aliases": [ 00:24:31.702 "lvs/nvme0n1p0" 00:24:31.702 ], 00:24:31.702 "product_name": "Logical Volume", 00:24:31.702 "block_size": 4096, 00:24:31.702 "num_blocks": 26476544, 00:24:31.702 "uuid": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:31.702 "assigned_rate_limits": { 00:24:31.702 "rw_ios_per_sec": 0, 00:24:31.702 "rw_mbytes_per_sec": 0, 00:24:31.702 "r_mbytes_per_sec": 0, 00:24:31.702 "w_mbytes_per_sec": 0 00:24:31.702 }, 00:24:31.702 "claimed": false, 00:24:31.702 "zoned": false, 00:24:31.702 "supported_io_types": { 00:24:31.702 "read": true, 00:24:31.702 "write": true, 00:24:31.702 "unmap": true, 00:24:31.702 "flush": false, 00:24:31.702 "reset": true, 00:24:31.702 "nvme_admin": false, 00:24:31.702 "nvme_io": false, 00:24:31.702 "nvme_io_md": false, 00:24:31.702 "write_zeroes": true, 00:24:31.702 "zcopy": false, 00:24:31.702 "get_zone_info": false, 00:24:31.702 "zone_management": false, 00:24:31.702 "zone_append": false, 00:24:31.702 "compare": false, 00:24:31.702 "compare_and_write": false, 00:24:31.702 "abort": false, 00:24:31.702 "seek_hole": true, 00:24:31.702 "seek_data": true, 00:24:31.702 "copy": false, 00:24:31.702 "nvme_iov_md": false 00:24:31.702 }, 00:24:31.702 "driver_specific": { 00:24:31.702 "lvol": { 00:24:31.702 "lvol_store_uuid": "60166dcf-c8f1-4d4a-88af-d4b046f75c24", 00:24:31.702 "base_bdev": "nvme0n1", 00:24:31.702 "thin_provision": true, 00:24:31.702 "num_allocated_clusters": 0, 00:24:31.702 "snapshot": false, 00:24:31.702 "clone": false, 00:24:31.702 "esnap_clone": false 00:24:31.702 } 00:24:31.702 } 00:24:31.702 } 00:24:31.702 ]' 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:31.702 01:24:05 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:31.702 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:31.963 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:32.262 { 00:24:32.262 "name": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:32.262 "aliases": [ 00:24:32.262 "lvs/nvme0n1p0" 00:24:32.262 ], 00:24:32.262 "product_name": "Logical 
Volume", 00:24:32.262 "block_size": 4096, 00:24:32.262 "num_blocks": 26476544, 00:24:32.262 "uuid": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:32.262 "assigned_rate_limits": { 00:24:32.262 "rw_ios_per_sec": 0, 00:24:32.262 "rw_mbytes_per_sec": 0, 00:24:32.262 "r_mbytes_per_sec": 0, 00:24:32.262 "w_mbytes_per_sec": 0 00:24:32.262 }, 00:24:32.262 "claimed": false, 00:24:32.262 "zoned": false, 00:24:32.262 "supported_io_types": { 00:24:32.262 "read": true, 00:24:32.262 "write": true, 00:24:32.262 "unmap": true, 00:24:32.262 "flush": false, 00:24:32.262 "reset": true, 00:24:32.262 "nvme_admin": false, 00:24:32.262 "nvme_io": false, 00:24:32.262 "nvme_io_md": false, 00:24:32.262 "write_zeroes": true, 00:24:32.262 "zcopy": false, 00:24:32.262 "get_zone_info": false, 00:24:32.262 "zone_management": false, 00:24:32.262 "zone_append": false, 00:24:32.262 "compare": false, 00:24:32.262 "compare_and_write": false, 00:24:32.262 "abort": false, 00:24:32.262 "seek_hole": true, 00:24:32.262 "seek_data": true, 00:24:32.262 "copy": false, 00:24:32.262 "nvme_iov_md": false 00:24:32.262 }, 00:24:32.262 "driver_specific": { 00:24:32.262 "lvol": { 00:24:32.262 "lvol_store_uuid": "60166dcf-c8f1-4d4a-88af-d4b046f75c24", 00:24:32.262 "base_bdev": "nvme0n1", 00:24:32.262 "thin_provision": true, 00:24:32.262 "num_allocated_clusters": 0, 00:24:32.262 "snapshot": false, 00:24:32.262 "clone": false, 00:24:32.262 "esnap_clone": false 00:24:32.262 } 00:24:32.262 } 00:24:32.262 } 00:24:32.262 ]' 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 
00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:32.262 01:24:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f52e9af-6e3a-48ce-81a1-9e3efdef132b 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:32.542 { 00:24:32.542 "name": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:32.542 "aliases": [ 00:24:32.542 "lvs/nvme0n1p0" 00:24:32.542 ], 00:24:32.542 "product_name": "Logical Volume", 00:24:32.542 "block_size": 4096, 00:24:32.542 "num_blocks": 26476544, 00:24:32.542 "uuid": "9f52e9af-6e3a-48ce-81a1-9e3efdef132b", 00:24:32.542 "assigned_rate_limits": { 00:24:32.542 "rw_ios_per_sec": 0, 00:24:32.542 "rw_mbytes_per_sec": 0, 00:24:32.542 "r_mbytes_per_sec": 0, 00:24:32.542 "w_mbytes_per_sec": 0 00:24:32.542 }, 00:24:32.542 "claimed": false, 00:24:32.542 "zoned": false, 00:24:32.542 "supported_io_types": { 00:24:32.542 "read": true, 00:24:32.542 "write": true, 00:24:32.542 "unmap": true, 00:24:32.542 "flush": false, 
00:24:32.542 "reset": true, 00:24:32.542 "nvme_admin": false, 00:24:32.542 "nvme_io": false, 00:24:32.542 "nvme_io_md": false, 00:24:32.542 "write_zeroes": true, 00:24:32.542 "zcopy": false, 00:24:32.542 "get_zone_info": false, 00:24:32.542 "zone_management": false, 00:24:32.542 "zone_append": false, 00:24:32.542 "compare": false, 00:24:32.542 "compare_and_write": false, 00:24:32.542 "abort": false, 00:24:32.542 "seek_hole": true, 00:24:32.542 "seek_data": true, 00:24:32.542 "copy": false, 00:24:32.542 "nvme_iov_md": false 00:24:32.542 }, 00:24:32.542 "driver_specific": { 00:24:32.542 "lvol": { 00:24:32.542 "lvol_store_uuid": "60166dcf-c8f1-4d4a-88af-d4b046f75c24", 00:24:32.542 "base_bdev": "nvme0n1", 00:24:32.542 "thin_provision": true, 00:24:32.542 "num_allocated_clusters": 0, 00:24:32.542 "snapshot": false, 00:24:32.542 "clone": false, 00:24:32.542 "esnap_clone": false 00:24:32.542 } 00:24:32.542 } 00:24:32.542 } 00:24:32.542 ]' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9f52e9af-6e3a-48ce-81a1-9e3efdef132b --l2p_dram_limit 10' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 
0000:00:10.0 ']' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:32.542 01:24:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f52e9af-6e3a-48ce-81a1-9e3efdef132b --l2p_dram_limit 10 -c nvc0n1p0 00:24:32.804 [2024-12-14 01:24:06.303894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.303937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:32.804 [2024-12-14 01:24:06.303947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:32.804 [2024-12-14 01:24:06.303955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.304000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.304009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.804 [2024-12-14 01:24:06.304017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:32.804 [2024-12-14 01:24:06.304026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.304040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:32.804 [2024-12-14 01:24:06.304315] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:32.804 [2024-12-14 01:24:06.304327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.304335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.804 [2024-12-14 01:24:06.304342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:24:32.804 [2024-12-14 01:24:06.304349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
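The trace above shows `dirty_shutdown.sh` assembling the `bdev_ftl_create` RPC incrementally: the base arguments first, then an optional `-c <cache bdev>` appended only when a cache device BDF was supplied. A sketch of that string assembly, with the UUID, cache partition name, and DRAM limit copied from this run (the actual `rpc.py -t 240` invocation is omitted):

```shell
# Rebuild ftl_construct_args the way ftl/dirty_shutdown.sh does in the
# trace above. split_bdev is the thin lvol created on lvs; nvc0n1p0 is
# the 5171 MiB split of the attached nvc0 controller.
split_bdev="9f52e9af-6e3a-48ce-81a1-9e3efdef132b"
cache_bdf="0000:00:10.0"
l2p_dram_size_mb=10

ftl_construct_args="bdev_ftl_create -b ftl0 -d $split_bdev --l2p_dram_limit $l2p_dram_size_mb"
# The '-c' write-buffer cache argument is only added when a cache BDF exists.
if [ -n "$cache_bdf" ]; then
  ftl_construct_args+=" -c nvc0n1p0"
fi
echo "$ftl_construct_args"
```

The resulting command line is exactly the one passed to `rpc.py` on the next trace line, and the `--l2p_dram_limit 10` value explains the later `l2p maximum resident size is: 9 (of 10) MiB` notice.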
00:24:32.804 [2024-12-14 01:24:06.304374] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 38b9c08e-308c-4ea7-a0bc-535ef4b9d616 00:24:32.804 [2024-12-14 01:24:06.305346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.305370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:32.804 [2024-12-14 01:24:06.305387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:32.804 [2024-12-14 01:24:06.305393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.310157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.310186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.804 [2024-12-14 01:24:06.310195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:24:32.804 [2024-12-14 01:24:06.310201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.310262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.310269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.804 [2024-12-14 01:24:06.310279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:32.804 [2024-12-14 01:24:06.310284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.310320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.310330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:32.804 [2024-12-14 01:24:06.310338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:32.804 [2024-12-14 01:24:06.310343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 
[2024-12-14 01:24:06.310361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:32.804 [2024-12-14 01:24:06.311640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.311668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.804 [2024-12-14 01:24:06.311675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:24:32.804 [2024-12-14 01:24:06.311682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.311709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.311718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:32.804 [2024-12-14 01:24:06.311725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:32.804 [2024-12-14 01:24:06.311734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.311752] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:32.804 [2024-12-14 01:24:06.311868] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:32.804 [2024-12-14 01:24:06.311877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:32.804 [2024-12-14 01:24:06.311893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:32.804 [2024-12-14 01:24:06.311903] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:32.804 [2024-12-14 01:24:06.311915] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:32.804 [2024-12-14 01:24:06.311921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] L2P entries: 20971520 00:24:32.804 [2024-12-14 01:24:06.311929] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:32.804 [2024-12-14 01:24:06.311935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:32.804 [2024-12-14 01:24:06.311941] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:32.804 [2024-12-14 01:24:06.311947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.311954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:32.804 [2024-12-14 01:24:06.311959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:24:32.804 [2024-12-14 01:24:06.311967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.312031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.804 [2024-12-14 01:24:06.312041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:32.804 [2024-12-14 01:24:06.312046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:32.804 [2024-12-14 01:24:06.312056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.804 [2024-12-14 01:24:06.312131] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:32.804 [2024-12-14 01:24:06.312141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:32.804 [2024-12-14 01:24:06.312147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.804 [2024-12-14 01:24:06.312156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.804 [2024-12-14 01:24:06.312164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:32.804 [2024-12-14 01:24:06.312171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:32.804 [2024-12-14 
01:24:06.312176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:32.804 [2024-12-14 01:24:06.312182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:32.804 [2024-12-14 01:24:06.312187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:32.804 [2024-12-14 01:24:06.312194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.804 [2024-12-14 01:24:06.312198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:32.804 [2024-12-14 01:24:06.312205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:32.804 [2024-12-14 01:24:06.312210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.804 [2024-12-14 01:24:06.312218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:32.804 [2024-12-14 01:24:06.312223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:32.804 [2024-12-14 01:24:06.312231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.804 [2024-12-14 01:24:06.312236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:32.804 [2024-12-14 01:24:06.312243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:32.804 [2024-12-14 01:24:06.312247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.804 [2024-12-14 01:24:06.312254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:32.805 [2024-12-14 01:24:06.312259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:32.805 [2024-12-14 01:24:06.312278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 
00:24:32.805 [2024-12-14 01:24:06.312284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:32.805 [2024-12-14 01:24:06.312296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:32.805 [2024-12-14 01:24:06.312317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:32.805 [2024-12-14 01:24:06.312335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.805 [2024-12-14 01:24:06.312348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:32.805 [2024-12-14 01:24:06.312354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:32.805 [2024-12-14 01:24:06.312360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.805 [2024-12-14 01:24:06.312367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:32.805 [2024-12-14 01:24:06.312372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:32.805 [2024-12-14 01:24:06.312379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:32.805 [2024-12-14 01:24:06.312391] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:24:32.805 [2024-12-14 01:24:06.312397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312404] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:32.805 [2024-12-14 01:24:06.312414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:32.805 [2024-12-14 01:24:06.312423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.805 [2024-12-14 01:24:06.312440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:32.805 [2024-12-14 01:24:06.312446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:32.805 [2024-12-14 01:24:06.312453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:32.805 [2024-12-14 01:24:06.312459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:32.805 [2024-12-14 01:24:06.312466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:32.805 [2024-12-14 01:24:06.312472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:32.805 [2024-12-14 01:24:06.312480] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:32.805 [2024-12-14 01:24:06.312489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:32.805 [2024-12-14 01:24:06.312504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:32.805 [2024-12-14 01:24:06.312512] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:32.805 [2024-12-14 01:24:06.312519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:32.805 [2024-12-14 01:24:06.312526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:32.805 [2024-12-14 01:24:06.312532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:32.805 [2024-12-14 01:24:06.312541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:32.805 [2024-12-14 01:24:06.312547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:32.805 [2024-12-14 01:24:06.312557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:32.805 [2024-12-14 01:24:06.312563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:32.805 [2024-12-14 01:24:06.312598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:32.805 [2024-12-14 01:24:06.312607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:32.805 [2024-12-14 01:24:06.312635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:32.805 [2024-12-14 01:24:06.312644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:32.805 [2024-12-14 01:24:06.312650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:32.805 [2024-12-14 01:24:06.312657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.805 [2024-12-14 01:24:06.312663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:32.805 [2024-12-14 01:24:06.312671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:24:32.805 [2024-12-14 01:24:06.312681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.805 [2024-12-14 01:24:06.312711] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
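The layout dump above is internally consistent: the 80.00 MiB `l2p` region follows directly from the reported entry count and per-entry address size. A small cross-check of that arithmetic, recomputing the logged figure:

```python
# Cross-check one figure from the ftl_layout_setup dump in the log above:
# the l2p region size equals L2P entries times the address size per entry.
MiB = 1024 * 1024

l2p_entries = 20_971_520   # "L2P entries: 20971520"
l2p_addr_size = 4          # "L2P address size: 4" (bytes per entry)

l2p_region_mib = l2p_entries * l2p_addr_size / MiB
print(l2p_region_mib)      # matches "Region l2p ... blocks: 80.00 MiB"
```

Note the entry count corresponds to the FTL's user-visible capacity (20971520 blocks of 4 KiB), which is smaller than the 103424 MiB base device; the difference is the space the layout reserves for metadata regions and overprovisioning.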
00:24:32.805 [2024-12-14 01:24:06.312718] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:36.108 [2024-12-14 01:24:09.267358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.267411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:36.108 [2024-12-14 01:24:09.267426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2954.634 ms 00:24:36.108 [2024-12-14 01:24:09.267438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.275066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.275103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.108 [2024-12-14 01:24:09.275116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.558 ms 00:24:36.108 [2024-12-14 01:24:09.275123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.275195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.275202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.108 [2024-12-14 01:24:09.275210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:36.108 [2024-12-14 01:24:09.275216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.282490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.282523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.108 [2024-12-14 01:24:09.282532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.240 ms 00:24:36.108 [2024-12-14 01:24:09.282540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.282563] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.282570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.108 [2024-12-14 01:24:09.282577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:36.108 [2024-12-14 01:24:09.282586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.282876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.282892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.108 [2024-12-14 01:24:09.282903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:24:36.108 [2024-12-14 01:24:09.282909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.282995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.283002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.108 [2024-12-14 01:24:09.283011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:36.108 [2024-12-14 01:24:09.283016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.287682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.287708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.108 [2024-12-14 01:24:09.287717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.650 ms 00:24:36.108 [2024-12-14 01:24:09.287723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.311570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:36.108 [2024-12-14 01:24:09.314352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 
[2024-12-14 01:24:09.314390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:36.108 [2024-12-14 01:24:09.314403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.565 ms 00:24:36.108 [2024-12-14 01:24:09.314414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.368114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.368154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:36.108 [2024-12-14 01:24:09.368165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.665 ms 00:24:36.108 [2024-12-14 01:24:09.368175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.368319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.368330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:36.108 [2024-12-14 01:24:09.368337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:36.108 [2024-12-14 01:24:09.368344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.371040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.371074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:36.108 [2024-12-14 01:24:09.371086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:24:36.108 [2024-12-14 01:24:09.371094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.373074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.373103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:36.108 [2024-12-14 01:24:09.373111] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:24:36.108 [2024-12-14 01:24:09.373118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.373352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.373367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:36.108 [2024-12-14 01:24:09.373374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:24:36.108 [2024-12-14 01:24:09.373382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.398145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.398178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:36.108 [2024-12-14 01:24:09.398188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.749 ms 00:24:36.108 [2024-12-14 01:24:09.398195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.401352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.401383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:36.108 [2024-12-14 01:24:09.401391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:24:36.108 [2024-12-14 01:24:09.401400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.403963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.403991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:36.108 [2024-12-14 01:24:09.403998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:24:36.108 [2024-12-14 01:24:09.404005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 
01:24:09.406702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.406735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:36.108 [2024-12-14 01:24:09.406742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:24:36.108 [2024-12-14 01:24:09.406751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.406779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.406789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:36.108 [2024-12-14 01:24:09.406795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.108 [2024-12-14 01:24:09.406802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.406850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.108 [2024-12-14 01:24:09.406858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:36.108 [2024-12-14 01:24:09.406864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:36.108 [2024-12-14 01:24:09.406874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.108 [2024-12-14 01:24:09.407714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3103.338 ms, result 0 00:24:36.108 { 00:24:36.108 "name": "ftl0", 00:24:36.108 "uuid": "38b9c08e-308c-4ea7-a0bc-535ef4b9d616" 00:24:36.108 } 00:24:36.109 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:36.109 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:36.109 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:36.109 01:24:09 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:36.109 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:36.369 /dev/nbd0 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:36.369 1+0 records in 00:24:36.369 1+0 records out 00:24:36.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026595 s, 15.4 MB/s 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 
00:24:36.369 01:24:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:36.369 [2024-12-14 01:24:09.923178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:36.369 [2024-12-14 01:24:09.923287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92720 ] 00:24:36.630 [2024-12-14 01:24:10.065757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.630 [2024-12-14 01:24:10.083891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.577  [2024-12-14T01:24:12.135Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-14T01:24:13.521Z] Copying: 393/1024 [MB] (196 MBps) [2024-12-14T01:24:14.464Z] Copying: 648/1024 [MB] (255 MBps) [2024-12-14T01:24:14.725Z] Copying: 909/1024 [MB] (260 MBps) [2024-12-14T01:24:14.725Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:24:41.113 00:24:41.374 01:24:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:43.287 01:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:43.287 [2024-12-14 01:24:16.799021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:43.287 [2024-12-14 01:24:16.799117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92796 ] 00:24:43.547 [2024-12-14 01:24:16.935947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.547 [2024-12-14 01:24:16.952368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.488  [2024-12-14T01:24:19.043Z] Copying: 32/1024 [MB] (32 MBps) [2024-12-14T01:24:20.422Z] Copying: 60/1024 [MB] (27 MBps) [2024-12-14T01:24:21.360Z] Copying: 91/1024 [MB] (30 MBps) [2024-12-14T01:24:22.294Z] Copying: 114/1024 [MB] (23 MBps) [2024-12-14T01:24:23.228Z] Copying: 133/1024 [MB] (18 MBps) [2024-12-14T01:24:24.163Z] Copying: 152/1024 [MB] (19 MBps) [2024-12-14T01:24:25.097Z] Copying: 177/1024 [MB] (24 MBps) [2024-12-14T01:24:26.028Z] Copying: 193/1024 [MB] (16 MBps) [2024-12-14T01:24:27.402Z] Copying: 207/1024 [MB] (14 MBps) [2024-12-14T01:24:28.336Z] Copying: 225/1024 [MB] (17 MBps) [2024-12-14T01:24:29.271Z] Copying: 240/1024 [MB] (15 MBps) [2024-12-14T01:24:30.280Z] Copying: 254/1024 [MB] (14 MBps) [2024-12-14T01:24:31.223Z] Copying: 272/1024 [MB] (17 MBps) [2024-12-14T01:24:32.158Z] Copying: 301/1024 [MB] (29 MBps) [2024-12-14T01:24:33.090Z] Copying: 332/1024 [MB] (31 MBps) [2024-12-14T01:24:34.024Z] Copying: 347/1024 [MB] (14 MBps) [2024-12-14T01:24:35.397Z] Copying: 361/1024 [MB] (14 MBps) [2024-12-14T01:24:36.330Z] Copying: 384/1024 [MB] (22 MBps) [2024-12-14T01:24:37.264Z] Copying: 398/1024 [MB] (14 MBps) [2024-12-14T01:24:38.197Z] Copying: 416/1024 [MB] (18 MBps) [2024-12-14T01:24:39.131Z] Copying: 428/1024 [MB] (11 MBps) [2024-12-14T01:24:40.065Z] Copying: 438/1024 [MB] (10 MBps) [2024-12-14T01:24:40.999Z] Copying: 451/1024 [MB] (13 MBps) [2024-12-14T01:24:42.372Z] Copying: 463/1024 [MB] (11 MBps) [2024-12-14T01:24:43.306Z] 
Copying: 476/1024 [MB] (13 MBps) [2024-12-14T01:24:44.240Z] Copying: 511/1024 [MB] (34 MBps) [2024-12-14T01:24:45.174Z] Copying: 544/1024 [MB] (32 MBps) [2024-12-14T01:24:46.107Z] Copying: 554/1024 [MB] (10 MBps) [2024-12-14T01:24:47.041Z] Copying: 566/1024 [MB] (11 MBps) [2024-12-14T01:24:48.415Z] Copying: 577/1024 [MB] (11 MBps) [2024-12-14T01:24:49.350Z] Copying: 599/1024 [MB] (21 MBps) [2024-12-14T01:24:50.284Z] Copying: 615/1024 [MB] (15 MBps) [2024-12-14T01:24:51.219Z] Copying: 628/1024 [MB] (13 MBps) [2024-12-14T01:24:52.153Z] Copying: 654/1024 [MB] (25 MBps) [2024-12-14T01:24:53.131Z] Copying: 671/1024 [MB] (17 MBps) [2024-12-14T01:24:54.078Z] Copying: 684/1024 [MB] (13 MBps) [2024-12-14T01:24:55.012Z] Copying: 698/1024 [MB] (14 MBps) [2024-12-14T01:24:56.382Z] Copying: 714/1024 [MB] (15 MBps) [2024-12-14T01:24:57.316Z] Copying: 727/1024 [MB] (13 MBps) [2024-12-14T01:24:58.250Z] Copying: 742/1024 [MB] (14 MBps) [2024-12-14T01:24:59.181Z] Copying: 758/1024 [MB] (15 MBps) [2024-12-14T01:25:00.115Z] Copying: 775/1024 [MB] (16 MBps) [2024-12-14T01:25:01.049Z] Copying: 794/1024 [MB] (19 MBps) [2024-12-14T01:25:02.423Z] Copying: 813/1024 [MB] (18 MBps) [2024-12-14T01:25:03.355Z] Copying: 831/1024 [MB] (17 MBps) [2024-12-14T01:25:04.289Z] Copying: 848/1024 [MB] (17 MBps) [2024-12-14T01:25:05.223Z] Copying: 872/1024 [MB] (24 MBps) [2024-12-14T01:25:06.157Z] Copying: 893/1024 [MB] (20 MBps) [2024-12-14T01:25:07.091Z] Copying: 911/1024 [MB] (18 MBps) [2024-12-14T01:25:08.025Z] Copying: 929/1024 [MB] (17 MBps) [2024-12-14T01:25:09.399Z] Copying: 942/1024 [MB] (12 MBps) [2024-12-14T01:25:10.333Z] Copying: 974/1024 [MB] (31 MBps) [2024-12-14T01:25:11.266Z] Copying: 990/1024 [MB] (16 MBps) [2024-12-14T01:25:11.266Z] Copying: 1016/1024 [MB] (25 MBps) [2024-12-14T01:25:11.526Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:25:37.914 00:25:37.914 01:25:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:37.914 01:25:11 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:38.172 01:25:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:38.172 [2024-12-14 01:25:11.782288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.172 [2024-12-14 01:25:11.782326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:38.172 [2024-12-14 01:25:11.782337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:38.172 [2024-12-14 01:25:11.782344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.172 [2024-12-14 01:25:11.782365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.172 [2024-12-14 01:25:11.782794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.172 [2024-12-14 01:25:11.782820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:38.172 [2024-12-14 01:25:11.782828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:25:38.172 [2024-12-14 01:25:11.782835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.784506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.784535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:38.434 [2024-12-14 01:25:11.784543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:25:38.434 [2024-12-14 01:25:11.784551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.797925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.797955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:38.434 [2024-12-14 
01:25:11.797966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.360 ms 00:25:38.434 [2024-12-14 01:25:11.797974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.802804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.802829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:38.434 [2024-12-14 01:25:11.802838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:25:38.434 [2024-12-14 01:25:11.802846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.803830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.803862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:38.434 [2024-12-14 01:25:11.803870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:25:38.434 [2024-12-14 01:25:11.803878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.807918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.807950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:38.434 [2024-12-14 01:25:11.807957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:25:38.434 [2024-12-14 01:25:11.807966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.434 [2024-12-14 01:25:11.808064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.434 [2024-12-14 01:25:11.808073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:38.434 [2024-12-14 01:25:11.808080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:38.434 [2024-12-14 01:25:11.808095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:38.434 [2024-12-14 01:25:11.809585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.435 [2024-12-14 01:25:11.809614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:38.435 [2024-12-14 01:25:11.809631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:25:38.435 [2024-12-14 01:25:11.809640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.435 [2024-12-14 01:25:11.810753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.435 [2024-12-14 01:25:11.810785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:38.435 [2024-12-14 01:25:11.810792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:25:38.435 [2024-12-14 01:25:11.810799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.435 [2024-12-14 01:25:11.811697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.435 [2024-12-14 01:25:11.811725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:38.435 [2024-12-14 01:25:11.811733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:25:38.435 [2024-12-14 01:25:11.811739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.435 [2024-12-14 01:25:11.812598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.435 [2024-12-14 01:25:11.812636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.435 [2024-12-14 01:25:11.812644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:25:38.435 [2024-12-14 01:25:11.812652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.435 [2024-12-14 01:25:11.812676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.435 [2024-12-14 01:25:11.812689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812786] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812879] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 
01:25:11.812974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.812993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 
[2024-12-14 01:25:11.813066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:25:38.435 [2024-12-14 01:25:11.813158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:38.435 [2024-12-14 01:25:11.813208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: 
free 00:25:38.436 [2024-12-14 01:25:11.813248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 
state: free 00:25:38.436 [2024-12-14 01:25:11.813339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:38.436 [2024-12-14 01:25:11.813360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:38.436 [2024-12-14 01:25:11.813365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 38b9c08e-308c-4ea7-a0bc-535ef4b9d616 00:25:38.436 [2024-12-14 01:25:11.813372] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:38.436 [2024-12-14 01:25:11.813378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:38.436 [2024-12-14 01:25:11.813385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:38.436 [2024-12-14 01:25:11.813390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:38.436 [2024-12-14 01:25:11.813397] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:38.436 [2024-12-14 01:25:11.813403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.436 [2024-12-14 01:25:11.813409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.436 [2024-12-14 01:25:11.813414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.436 [2024-12-14 01:25:11.813425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.436 [2024-12-14 01:25:11.813430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.436 [2024-12-14 01:25:11.813438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.436 [2024-12-14 01:25:11.813445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:25:38.436 [2024-12-14 01:25:11.813453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:38.436 [2024-12-14 01:25:11.814740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.436 [2024-12-14 01:25:11.814767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.436 [2024-12-14 01:25:11.814774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:25:38.436 [2024-12-14 01:25:11.814782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.814861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.436 [2024-12-14 01:25:11.814871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.436 [2024-12-14 01:25:11.814878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:38.436 [2024-12-14 01:25:11.814885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.819422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.819450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.436 [2024-12-14 01:25:11.819458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.819470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.819512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.819522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.436 [2024-12-14 01:25:11.819528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.819537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.819581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.819592] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.436 [2024-12-14 01:25:11.819598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.819605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.819618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.819639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.436 [2024-12-14 01:25:11.819645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.819657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.827885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.827922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.436 [2024-12-14 01:25:11.827931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.827938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.834713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.834746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.436 [2024-12-14 01:25:11.834757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.834765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.834818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.834830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.436 [2024-12-14 01:25:11.834836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:38.436 [2024-12-14 01:25:11.834844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.834871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.834880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.436 [2024-12-14 01:25:11.834886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.834893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.834948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.834957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.436 [2024-12-14 01:25:11.834963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.834970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.834994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.835003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.436 [2024-12-14 01:25:11.835009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.835016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.835050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.835064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.436 [2024-12-14 01:25:11.835070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.835077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.835113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:38.436 [2024-12-14 01:25:11.835122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.436 [2024-12-14 01:25:11.835128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.436 [2024-12-14 01:25:11.835135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.436 [2024-12-14 01:25:11.835238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.928 ms, result 0 00:25:38.436 true 00:25:38.436 01:25:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92583 00:25:38.436 01:25:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92583 00:25:38.436 01:25:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:38.436 [2024-12-14 01:25:11.924634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:25:38.436 [2024-12-14 01:25:11.924749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93373 ] 00:25:38.697 [2024-12-14 01:25:12.066965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.697 [2024-12-14 01:25:12.083834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.639  [2024-12-14T01:25:14.191Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-14T01:25:15.132Z] Copying: 519/1024 [MB] (259 MBps) [2024-12-14T01:25:16.543Z] Copying: 774/1024 [MB] (255 MBps) [2024-12-14T01:25:16.543Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:25:42.931 00:25:42.931 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92583 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:42.931 01:25:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.931 [2024-12-14 01:25:16.305062] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:25:42.931 [2024-12-14 01:25:16.305187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93416 ] 00:25:42.931 [2024-12-14 01:25:16.442498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.931 [2024-12-14 01:25:16.460843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.193 [2024-12-14 01:25:16.543427] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.193 [2024-12-14 01:25:16.543481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.193 [2024-12-14 01:25:16.605091] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:43.193 [2024-12-14 01:25:16.605428] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:43.193 [2024-12-14 01:25:16.605647] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:43.193 [2024-12-14 01:25:16.776423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.776469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.193 [2024-12-14 01:25:16.776482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.193 [2024-12-14 01:25:16.776503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.776555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.776565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.193 [2024-12-14 01:25:16.776573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:43.193 [2024-12-14 01:25:16.776580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:43.193 [2024-12-14 01:25:16.776602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.193 [2024-12-14 01:25:16.776859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.193 [2024-12-14 01:25:16.776876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.776883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.193 [2024-12-14 01:25:16.776894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:43.193 [2024-12-14 01:25:16.776901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.777987] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.193 [2024-12-14 01:25:16.780593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.780639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.193 [2024-12-14 01:25:16.780657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:25:43.193 [2024-12-14 01:25:16.780664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.780717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.780726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.193 [2024-12-14 01:25:16.780738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:43.193 [2024-12-14 01:25:16.780745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.785847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.785880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:25:43.193 [2024-12-14 01:25:16.785890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.051 ms 00:25:43.193 [2024-12-14 01:25:16.785901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.785982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.785991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.193 [2024-12-14 01:25:16.785999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:43.193 [2024-12-14 01:25:16.786009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.786056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.786068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.193 [2024-12-14 01:25:16.786076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:43.193 [2024-12-14 01:25:16.786089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.786108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.193 [2024-12-14 01:25:16.787461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.787489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.193 [2024-12-14 01:25:16.787498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:25:43.193 [2024-12-14 01:25:16.787508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.787537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.787547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.193 [2024-12-14 
01:25:16.787554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:43.193 [2024-12-14 01:25:16.787562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.787584] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.193 [2024-12-14 01:25:16.787603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.193 [2024-12-14 01:25:16.787654] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.193 [2024-12-14 01:25:16.787676] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.193 [2024-12-14 01:25:16.787779] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.193 [2024-12-14 01:25:16.787789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.193 [2024-12-14 01:25:16.787799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.193 [2024-12-14 01:25:16.787809] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.193 [2024-12-14 01:25:16.787817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.193 [2024-12-14 01:25:16.787828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.193 [2024-12-14 01:25:16.787836] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.193 [2024-12-14 01:25:16.787843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.193 [2024-12-14 01:25:16.787852] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
NV cache chunk count 5 00:25:43.193 [2024-12-14 01:25:16.787860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.787867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.193 [2024-12-14 01:25:16.787877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:25:43.193 [2024-12-14 01:25:16.787884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.787966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.193 [2024-12-14 01:25:16.787975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.193 [2024-12-14 01:25:16.787983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:43.193 [2024-12-14 01:25:16.787990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.193 [2024-12-14 01:25:16.788084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.193 [2024-12-14 01:25:16.788103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.193 [2024-12-14 01:25:16.788112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.193 [2024-12-14 01:25:16.788142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.193 [2024-12-14 01:25:16.788165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788172] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:25:43.193 [2024-12-14 01:25:16.788180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.193 [2024-12-14 01:25:16.788188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.193 [2024-12-14 01:25:16.788195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.193 [2024-12-14 01:25:16.788207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.193 [2024-12-14 01:25:16.788215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.193 [2024-12-14 01:25:16.788222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.193 [2024-12-14 01:25:16.788237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.193 [2024-12-14 01:25:16.788260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.193 [2024-12-14 01:25:16.788281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.193 [2024-12-14 01:25:16.788303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788310] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.193 [2024-12-14 01:25:16.788329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.193 [2024-12-14 01:25:16.788337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.193 [2024-12-14 01:25:16.788345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.193 [2024-12-14 01:25:16.788352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.194 [2024-12-14 01:25:16.788360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.194 [2024-12-14 01:25:16.788367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.194 [2024-12-14 01:25:16.788374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.194 [2024-12-14 01:25:16.788382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.194 [2024-12-14 01:25:16.788389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.194 [2024-12-14 01:25:16.788400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:43.194 [2024-12-14 01:25:16.788408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.194 [2024-12-14 01:25:16.788415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.194 [2024-12-14 01:25:16.788423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.194 [2024-12-14 01:25:16.788430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.194 [2024-12-14 01:25:16.788437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.194 [2024-12-14 01:25:16.788446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.194 
[2024-12-14 01:25:16.788456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.194 [2024-12-14 01:25:16.788464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.194 [2024-12-14 01:25:16.788473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.194 [2024-12-14 01:25:16.788480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.194 [2024-12-14 01:25:16.788488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.194 [2024-12-14 01:25:16.788496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.194 [2024-12-14 01:25:16.788504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.194 [2024-12-14 01:25:16.788511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.194 [2024-12-14 01:25:16.788520] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.194 [2024-12-14 01:25:16.788534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.194 [2024-12-14 01:25:16.788552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.194 [2024-12-14 01:25:16.788559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.194 [2024-12-14 01:25:16.788566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.194 [2024-12-14 01:25:16.788574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.194 [2024-12-14 01:25:16.788585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.194 [2024-12-14 01:25:16.788594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.194 [2024-12-14 01:25:16.788601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:43.194 [2024-12-14 01:25:16.788608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.194 [2024-12-14 01:25:16.788615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.194 [2024-12-14 01:25:16.788662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.194 [2024-12-14 01:25:16.788675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.194 [2024-12-14 
01:25:16.788683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.194 [2024-12-14 01:25:16.788690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.194 [2024-12-14 01:25:16.788697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.194 [2024-12-14 01:25:16.788704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.194 [2024-12-14 01:25:16.788712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.194 [2024-12-14 01:25:16.788722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.194 [2024-12-14 01:25:16.788734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:25:43.194 [2024-12-14 01:25:16.788741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.194 [2024-12-14 01:25:16.797678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.194 [2024-12-14 01:25:16.797709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.194 [2024-12-14 01:25:16.797718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.896 ms 00:25:43.194 [2024-12-14 01:25:16.797728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.194 [2024-12-14 01:25:16.797806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.194 [2024-12-14 01:25:16.797816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.194 [2024-12-14 01:25:16.797824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:43.194 [2024-12-14 01:25:16.797831] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.454 [2024-12-14 01:25:16.824742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.454 [2024-12-14 01:25:16.824828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.454 [2024-12-14 01:25:16.824860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.857 ms 00:25:43.454 [2024-12-14 01:25:16.824880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.454 [2024-12-14 01:25:16.825044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.454 [2024-12-14 01:25:16.825095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.454 [2024-12-14 01:25:16.825140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:43.454 [2024-12-14 01:25:16.825160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.454 [2024-12-14 01:25:16.825848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.454 [2024-12-14 01:25:16.825876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.454 [2024-12-14 01:25:16.825886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:25:43.454 [2024-12-14 01:25:16.825894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.454 [2024-12-14 01:25:16.826023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.454 [2024-12-14 01:25:16.826034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.454 [2024-12-14 01:25:16.826043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:43.454 [2024-12-14 01:25:16.826054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.454 [2024-12-14 01:25:16.831444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.454 [2024-12-14 
01:25:16.831474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.455 [2024-12-14 01:25:16.831483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms 00:25:43.455 [2024-12-14 01:25:16.831490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.834433] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:43.455 [2024-12-14 01:25:16.834464] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.455 [2024-12-14 01:25:16.834475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.834486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.455 [2024-12-14 01:25:16.834494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:25:43.455 [2024-12-14 01:25:16.834501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.849155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.849186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.455 [2024-12-14 01:25:16.849197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:25:43.455 [2024-12-14 01:25:16.849211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.851284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.851312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.455 [2024-12-14 01:25:16.851321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:25:43.455 [2024-12-14 01:25:16.851328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:43.455 [2024-12-14 01:25:16.853230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.853256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.455 [2024-12-14 01:25:16.853264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:25:43.455 [2024-12-14 01:25:16.853271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.853639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.853655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.455 [2024-12-14 01:25:16.853663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:25:43.455 [2024-12-14 01:25:16.853671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.869970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.870018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.455 [2024-12-14 01:25:16.870038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.281 ms 00:25:43.455 [2024-12-14 01:25:16.870047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.878329] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:43.455 [2024-12-14 01:25:16.880855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.880889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.455 [2024-12-14 01:25:16.880908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.764 ms 00:25:43.455 [2024-12-14 01:25:16.880916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.881008] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.881019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.455 [2024-12-14 01:25:16.881033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:43.455 [2024-12-14 01:25:16.881041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.881109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.881119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.455 [2024-12-14 01:25:16.881127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:43.455 [2024-12-14 01:25:16.881134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.881152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.881160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.455 [2024-12-14 01:25:16.881168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:43.455 [2024-12-14 01:25:16.881182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.881213] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.455 [2024-12-14 01:25:16.881222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.881230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:43.455 [2024-12-14 01:25:16.881237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:43.455 [2024-12-14 01:25:16.881247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.885204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:43.455 [2024-12-14 01:25:16.885235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.455 [2024-12-14 01:25:16.885244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.933 ms 00:25:43.455 [2024-12-14 01:25:16.885252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.885322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.455 [2024-12-14 01:25:16.885331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.455 [2024-12-14 01:25:16.885339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:43.455 [2024-12-14 01:25:16.885346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.455 [2024-12-14 01:25:16.886294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.462 ms, result 0 00:25:44.397  [2024-12-14T01:25:18.950Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-14T01:25:20.337Z] Copying: 27/1024 [MB] (13 MBps) [2024-12-14T01:25:20.909Z] Copying: 52/1024 [MB] (25 MBps) [2024-12-14T01:25:22.296Z] Copying: 64/1024 [MB] (12 MBps) [2024-12-14T01:25:23.240Z] Copying: 93/1024 [MB] (28 MBps) [2024-12-14T01:25:24.184Z] Copying: 104/1024 [MB] (10 MBps) [2024-12-14T01:25:25.128Z] Copying: 114/1024 [MB] (10 MBps) [2024-12-14T01:25:26.070Z] Copying: 140/1024 [MB] (25 MBps) [2024-12-14T01:25:27.015Z] Copying: 150/1024 [MB] (10 MBps) [2024-12-14T01:25:27.960Z] Copying: 162/1024 [MB] (11 MBps) [2024-12-14T01:25:28.904Z] Copying: 172/1024 [MB] (10 MBps) [2024-12-14T01:25:30.292Z] Copying: 184/1024 [MB] (11 MBps) [2024-12-14T01:25:31.239Z] Copying: 207/1024 [MB] (23 MBps) [2024-12-14T01:25:32.183Z] Copying: 234/1024 [MB] (26 MBps) [2024-12-14T01:25:33.127Z] Copying: 250/1024 [MB] (16 MBps) [2024-12-14T01:25:34.068Z] Copying: 266/1024 [MB] (15 MBps) [2024-12-14T01:25:35.015Z] Copying: 312/1024 [MB] 
(46 MBps) [2024-12-14T01:25:35.957Z] Copying: 345/1024 [MB] (32 MBps) [2024-12-14T01:25:36.901Z] Copying: 390/1024 [MB] (45 MBps) [2024-12-14T01:25:38.286Z] Copying: 416/1024 [MB] (25 MBps) [2024-12-14T01:25:38.921Z] Copying: 440/1024 [MB] (23 MBps) [2024-12-14T01:25:40.307Z] Copying: 467/1024 [MB] (27 MBps) [2024-12-14T01:25:41.254Z] Copying: 483/1024 [MB] (15 MBps) [2024-12-14T01:25:42.197Z] Copying: 515/1024 [MB] (31 MBps) [2024-12-14T01:25:43.141Z] Copying: 541/1024 [MB] (26 MBps) [2024-12-14T01:25:44.083Z] Copying: 562/1024 [MB] (21 MBps) [2024-12-14T01:25:45.026Z] Copying: 607/1024 [MB] (44 MBps) [2024-12-14T01:25:45.969Z] Copying: 644/1024 [MB] (37 MBps) [2024-12-14T01:25:46.911Z] Copying: 687/1024 [MB] (42 MBps) [2024-12-14T01:25:48.295Z] Copying: 720/1024 [MB] (33 MBps) [2024-12-14T01:25:49.239Z] Copying: 741/1024 [MB] (21 MBps) [2024-12-14T01:25:50.182Z] Copying: 761/1024 [MB] (19 MBps) [2024-12-14T01:25:51.126Z] Copying: 787/1024 [MB] (25 MBps) [2024-12-14T01:25:52.069Z] Copying: 801/1024 [MB] (13 MBps) [2024-12-14T01:25:53.009Z] Copying: 836/1024 [MB] (35 MBps) [2024-12-14T01:25:53.951Z] Copying: 860/1024 [MB] (23 MBps) [2024-12-14T01:25:55.337Z] Copying: 884/1024 [MB] (24 MBps) [2024-12-14T01:25:55.908Z] Copying: 917/1024 [MB] (33 MBps) [2024-12-14T01:25:57.294Z] Copying: 958/1024 [MB] (41 MBps) [2024-12-14T01:25:58.237Z] Copying: 975/1024 [MB] (16 MBps) [2024-12-14T01:25:59.180Z] Copying: 985/1024 [MB] (10 MBps) [2024-12-14T01:25:59.443Z] Copying: 1014/1024 [MB] (28 MBps) [2024-12-14T01:25:59.443Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-14 01:25:59.227441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.227476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:25.831 [2024-12-14 01:25:59.227486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:25.831 [2024-12-14 01:25:59.227493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.227509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:25.831 [2024-12-14 01:25:59.227899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.227915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:25.831 [2024-12-14 01:25:59.227922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:26:25.831 [2024-12-14 01:25:59.227934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.229765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.229793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:25.831 [2024-12-14 01:25:59.229801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:26:25.831 [2024-12-14 01:25:59.229807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.244153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.244183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:25.831 [2024-12-14 01:25:59.244191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:26:25.831 [2024-12-14 01:25:59.244197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.248960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.248985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:25.831 [2024-12-14 01:25:59.248993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:26:25.831 [2024-12-14 01:25:59.249000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 
01:25:59.249894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.249925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:25.831 [2024-12-14 01:25:59.249932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:26:25.831 [2024-12-14 01:25:59.249937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.253047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.253075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:25.831 [2024-12-14 01:25:59.253082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:26:25.831 [2024-12-14 01:25:59.253088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.831 [2024-12-14 01:25:59.254019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.831 [2024-12-14 01:25:59.254056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:25.831 [2024-12-14 01:25:59.254066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:26:25.831 [2024-12-14 01:25:59.254072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.832 [2024-12-14 01:25:59.255606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.832 [2024-12-14 01:25:59.255642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.832 [2024-12-14 01:25:59.255650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:26:25.832 [2024-12-14 01:25:59.255655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.832 [2024-12-14 01:25:59.256965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.832 [2024-12-14 01:25:59.256992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist trim metadata 00:26:25.832 [2024-12-14 01:25:59.256999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:26:25.832 [2024-12-14 01:25:59.257004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.832 [2024-12-14 01:25:59.258163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.832 [2024-12-14 01:25:59.258192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.832 [2024-12-14 01:25:59.258199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:26:25.832 [2024-12-14 01:25:59.258205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.832 [2024-12-14 01:25:59.258988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.832 [2024-12-14 01:25:59.259017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.832 [2024-12-14 01:25:59.259024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:26:25.832 [2024-12-14 01:25:59.259029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.832 [2024-12-14 01:25:59.259057] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.832 [2024-12-14 01:25:59.259067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:26:25.832 [2024-12-14 01:25:59.259075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 
[2024-12-14 01:25:59.259098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 
[2024-12-14 01:25:59.259178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:26:25.832 [2024-12-14 01:25:59.259260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: 
free 00:26:25.832 [2024-12-14 01:25:59.259346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 
state: free 00:26:25.832 [2024-12-14 01:25:59.259426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 
0 state: free 00:26:25.832 [2024-12-14 01:25:59.259508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.832 [2024-12-14 01:25:59.259520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.833 [2024-12-14 01:25:59.259667] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.833 [2024-12-14 01:25:59.259673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 38b9c08e-308c-4ea7-a0bc-535ef4b9d616 00:26:25.833 [2024-12-14 01:25:59.259680] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:26:25.833 [2024-12-14 
01:25:59.259686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:26:25.833 [2024-12-14 01:25:59.259694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:26:25.833 [2024-12-14 01:25:59.259701] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:26:25.833 [2024-12-14 01:25:59.259709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.833 [2024-12-14 01:25:59.259715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.833 [2024-12-14 01:25:59.259720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.833 [2024-12-14 01:25:59.259725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.833 [2024-12-14 01:25:59.259730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.833 [2024-12-14 01:25:59.259735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.833 [2024-12-14 01:25:59.259741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.833 [2024-12-14 01:25:59.259747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:26:25.833 [2024-12-14 01:25:59.259753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.260995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.833 [2024-12-14 01:25:59.261018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.833 [2024-12-14 01:25:59.261025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:26:25.833 [2024-12-14 01:25:59.261031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.261103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.833 [2024-12-14 01:25:59.261111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:26:25.833 [2024-12-14 01:25:59.261118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:25.833 [2024-12-14 01:25:59.261124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.265198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.265226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.833 [2024-12-14 01:25:59.265234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.265240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.265276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.265282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.833 [2024-12-14 01:25:59.265288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.265293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.265324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.265331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.833 [2024-12-14 01:25:59.265336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.265342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.265353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.265360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.833 [2024-12-14 01:25:59.265365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.265371] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.272875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.272908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.833 [2024-12-14 01:25:59.272916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.272927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.833 [2024-12-14 01:25:59.279049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.833 [2024-12-14 01:25:59.279110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.833 [2024-12-14 01:25:59.279166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 
[2024-12-14 01:25:59.279228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.833 [2024-12-14 01:25:59.279236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.833 [2024-12-14 01:25:59.279277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.833 [2024-12-14 01:25:59.279325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.833 [2024-12-14 01:25:59.279379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.833 [2024-12-14 01:25:59.279385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.833 [2024-12-14 01:25:59.279391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.833 [2024-12-14 01:25:59.279483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.017 ms, result 0 00:26:26.405 00:26:26.405 00:26:26.405 01:26:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:28.990 01:26:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:28.990 [2024-12-14 01:26:02.308777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:28.990 [2024-12-14 01:26:02.309101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93890 ] 00:26:28.990 [2024-12-14 01:26:02.450903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.990 [2024-12-14 01:26:02.475544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.990 [2024-12-14 01:26:02.563792] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:28.990 [2024-12-14 01:26:02.563852] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:29.253 [2024-12-14 01:26:02.711470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.711508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:29.253 [2024-12-14 01:26:02.711523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:29.253 [2024-12-14 01:26:02.711530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.711563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.711571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.253 [2024-12-14 01:26:02.711578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 
00:26:29.253 [2024-12-14 01:26:02.711588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.711611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:29.253 [2024-12-14 01:26:02.711985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:29.253 [2024-12-14 01:26:02.712009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.712018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.253 [2024-12-14 01:26:02.712028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:26:29.253 [2024-12-14 01:26:02.712034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.712973] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:29.253 [2024-12-14 01:26:02.714958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.714986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:29.253 [2024-12-14 01:26:02.714994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:26:29.253 [2024-12-14 01:26:02.715006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.715050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.715058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:29.253 [2024-12-14 01:26:02.715067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:29.253 [2024-12-14 01:26:02.715075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.719417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 
[2024-12-14 01:26:02.719443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.253 [2024-12-14 01:26:02.719458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:26:29.253 [2024-12-14 01:26:02.719463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.719527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.719535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.253 [2024-12-14 01:26:02.719541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:29.253 [2024-12-14 01:26:02.719547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.719588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.719595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:29.253 [2024-12-14 01:26:02.719601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:29.253 [2024-12-14 01:26:02.719610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.719642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:29.253 [2024-12-14 01:26:02.720787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.720812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.253 [2024-12-14 01:26:02.720819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:26:29.253 [2024-12-14 01:26:02.720825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.720849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.720856] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:29.253 [2024-12-14 01:26:02.720861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:29.253 [2024-12-14 01:26:02.720869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.720883] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:29.253 [2024-12-14 01:26:02.720898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:29.253 [2024-12-14 01:26:02.720929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:29.253 [2024-12-14 01:26:02.720940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:29.253 [2024-12-14 01:26:02.721017] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:29.253 [2024-12-14 01:26:02.721025] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:29.253 [2024-12-14 01:26:02.721035] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:29.253 [2024-12-14 01:26:02.721043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:29.253 [2024-12-14 01:26:02.721049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:29.253 [2024-12-14 01:26:02.721055] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:29.253 [2024-12-14 01:26:02.721061] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:29.253 [2024-12-14 01:26:02.721066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 
00:26:29.253 [2024-12-14 01:26:02.721072] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:29.253 [2024-12-14 01:26:02.721077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.721082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:29.253 [2024-12-14 01:26:02.721088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:26:29.253 [2024-12-14 01:26:02.721093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.721161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.253 [2024-12-14 01:26:02.721168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:29.253 [2024-12-14 01:26:02.721173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:29.253 [2024-12-14 01:26:02.721180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.253 [2024-12-14 01:26:02.721254] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:29.253 [2024-12-14 01:26:02.721262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:29.253 [2024-12-14 01:26:02.721268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:29.253 [2024-12-14 01:26:02.721274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.253 [2024-12-14 01:26:02.721279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:29.253 [2024-12-14 01:26:02.721284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:29.253 [2024-12-14 01:26:02.721289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:29.253 [2024-12-14 01:26:02.721294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:29.253 [2024-12-14 01:26:02.721299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 80.12 MiB 00:26:29.253 [2024-12-14 01:26:02.721305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:29.253 [2024-12-14 01:26:02.721309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:29.253 [2024-12-14 01:26:02.721314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:29.253 [2024-12-14 01:26:02.721323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:29.253 [2024-12-14 01:26:02.721330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:29.253 [2024-12-14 01:26:02.721336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:29.253 [2024-12-14 01:26:02.721340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.253 [2024-12-14 01:26:02.721345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:29.253 [2024-12-14 01:26:02.721350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:29.253 [2024-12-14 01:26:02.721355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:29.254 [2024-12-14 01:26:02.721364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:29.254 [2024-12-14 01:26:02.721379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:29.254 [2024-12-14 01:26:02.721394] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:29.254 [2024-12-14 01:26:02.721412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:29.254 [2024-12-14 01:26:02.721426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:29.254 [2024-12-14 01:26:02.721437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:29.254 [2024-12-14 01:26:02.721442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:29.254 [2024-12-14 01:26:02.721448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:29.254 [2024-12-14 01:26:02.721453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:29.254 [2024-12-14 01:26:02.721458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:29.254 [2024-12-14 01:26:02.721464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:29.254 [2024-12-14 01:26:02.721475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:29.254 [2024-12-14 01:26:02.721481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721486] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:29.254 [2024-12-14 
01:26:02.721499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:29.254 [2024-12-14 01:26:02.721507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:29.254 [2024-12-14 01:26:02.721519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:29.254 [2024-12-14 01:26:02.721525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:29.254 [2024-12-14 01:26:02.721530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:29.254 [2024-12-14 01:26:02.721536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:29.254 [2024-12-14 01:26:02.721541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:29.254 [2024-12-14 01:26:02.721554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:29.254 [2024-12-14 01:26:02.721561] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:29.254 [2024-12-14 01:26:02.721571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:29.254 [2024-12-14 01:26:02.721584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:29.254 [2024-12-14 01:26:02.721590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:29.254 [2024-12-14 01:26:02.721596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 
00:26:29.254 [2024-12-14 01:26:02.721603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:29.254 [2024-12-14 01:26:02.721611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:29.254 [2024-12-14 01:26:02.721633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:29.254 [2024-12-14 01:26:02.721639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:29.254 [2024-12-14 01:26:02.721645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:29.254 [2024-12-14 01:26:02.721655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:29.254 [2024-12-14 01:26:02.721685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:29.254 [2024-12-14 01:26:02.721692] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:29.254 [2024-12-14 01:26:02.721705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:29.254 [2024-12-14 01:26:02.721711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:29.254 [2024-12-14 01:26:02.721717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:29.254 [2024-12-14 01:26:02.721723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.721732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:29.254 [2024-12-14 01:26:02.721740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:26:29.254 [2024-12-14 01:26:02.721751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.729517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.729543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.254 [2024-12-14 01:26:02.729572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.735 ms 00:26:29.254 [2024-12-14 01:26:02.729577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.729648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.729654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:29.254 [2024-12-14 01:26:02.729660] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:29.254 [2024-12-14 01:26:02.729666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.749568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.749643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.254 [2024-12-14 01:26:02.749661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.861 ms 00:26:29.254 [2024-12-14 01:26:02.749673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.749727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.749751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.254 [2024-12-14 01:26:02.749763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:29.254 [2024-12-14 01:26:02.749774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.750169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.750202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.254 [2024-12-14 01:26:02.750216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:26:29.254 [2024-12-14 01:26:02.750228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.750403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.750425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.254 [2024-12-14 01:26:02.750439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:26:29.254 [2024-12-14 01:26:02.750451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:29.254 [2024-12-14 01:26:02.756052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.756084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.254 [2024-12-14 01:26:02.756094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.570 ms 00:26:29.254 [2024-12-14 01:26:02.756102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.758298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:26:29.254 [2024-12-14 01:26:02.758332] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:29.254 [2024-12-14 01:26:02.758349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.758356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:29.254 [2024-12-14 01:26:02.758364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:26:29.254 [2024-12-14 01:26:02.758371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.771019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.771127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:29.254 [2024-12-14 01:26:02.771140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.611 ms 00:26:29.254 [2024-12-14 01:26:02.771155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.254 [2024-12-14 01:26:02.772743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.254 [2024-12-14 01:26:02.772768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:29.254 [2024-12-14 01:26:02.772775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.567 ms 00:26:29.255 [2024-12-14 01:26:02.772780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.774001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.774026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:29.255 [2024-12-14 01:26:02.774033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:26:29.255 [2024-12-14 01:26:02.774038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.774271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.774281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:29.255 [2024-12-14 01:26:02.774288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:26:29.255 [2024-12-14 01:26:02.774293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.788208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.788246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:29.255 [2024-12-14 01:26:02.788254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.900 ms 00:26:29.255 [2024-12-14 01:26:02.788261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.793919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:29.255 [2024-12-14 01:26:02.795859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.795886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:29.255 [2024-12-14 01:26:02.795894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.567 ms 00:26:29.255 [2024-12-14 
01:26:02.795900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.795944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.795952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:29.255 [2024-12-14 01:26:02.795966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:29.255 [2024-12-14 01:26:02.795977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.796448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.796478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:29.255 [2024-12-14 01:26:02.796487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:26:29.255 [2024-12-14 01:26:02.796493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.796509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.796518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:29.255 [2024-12-14 01:26:02.796526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:29.255 [2024-12-14 01:26:02.796532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.796557] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:29.255 [2024-12-14 01:26:02.796565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.796571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:29.255 [2024-12-14 01:26:02.796580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:29.255 [2024-12-14 01:26:02.796587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.799194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.799222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:29.255 [2024-12-14 01:26:02.799230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:26:29.255 [2024-12-14 01:26:02.799237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.799292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.255 [2024-12-14 01:26:02.799300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:29.255 [2024-12-14 01:26:02.799306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:29.255 [2024-12-14 01:26:02.799312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.255 [2024-12-14 01:26:02.800506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.714 ms, result 0 00:26:30.642  [2024-12-14T01:26:05.197Z] Copying: 992/1048576 [kB] (992 kBps) [2024-12-14T01:26:06.139Z] Copying: 2088/1048576 [kB] (1096 kBps) [2024-12-14T01:26:07.082Z] Copying: 8408/1048576 [kB] (6320 kBps) [2024-12-14T01:26:08.027Z] Copying: 41/1024 [MB] (33 MBps) [2024-12-14T01:26:08.967Z] Copying: 72/1024 [MB] (31 MBps) [2024-12-14T01:26:10.352Z] Copying: 109/1024 [MB] (36 MBps) [2024-12-14T01:26:11.295Z] Copying: 142/1024 [MB] (33 MBps) [2024-12-14T01:26:12.238Z] Copying: 175/1024 [MB] (33 MBps) [2024-12-14T01:26:13.180Z] Copying: 206/1024 [MB] (30 MBps) [2024-12-14T01:26:14.124Z] Copying: 241/1024 [MB] (35 MBps) [2024-12-14T01:26:15.067Z] Copying: 272/1024 [MB] (30 MBps) [2024-12-14T01:26:16.010Z] Copying: 308/1024 [MB] (36 MBps) [2024-12-14T01:26:16.954Z] Copying: 337/1024 [MB] (28 MBps) [2024-12-14T01:26:18.340Z] Copying: 368/1024 [MB] (31 MBps) [2024-12-14T01:26:19.284Z] 
Copying: 403/1024 [MB] (34 MBps) [2024-12-14T01:26:20.227Z] Copying: 434/1024 [MB] (30 MBps) [2024-12-14T01:26:21.170Z] Copying: 463/1024 [MB] (29 MBps) [2024-12-14T01:26:22.114Z] Copying: 500/1024 [MB] (36 MBps) [2024-12-14T01:26:23.058Z] Copying: 521/1024 [MB] (21 MBps) [2024-12-14T01:26:24.002Z] Copying: 549/1024 [MB] (27 MBps) [2024-12-14T01:26:24.943Z] Copying: 576/1024 [MB] (26 MBps) [2024-12-14T01:26:25.950Z] Copying: 604/1024 [MB] (27 MBps) [2024-12-14T01:26:27.339Z] Copying: 635/1024 [MB] (30 MBps) [2024-12-14T01:26:28.280Z] Copying: 658/1024 [MB] (23 MBps) [2024-12-14T01:26:29.221Z] Copying: 675/1024 [MB] (16 MBps) [2024-12-14T01:26:30.166Z] Copying: 705/1024 [MB] (30 MBps) [2024-12-14T01:26:31.110Z] Copying: 722/1024 [MB] (16 MBps) [2024-12-14T01:26:32.054Z] Copying: 738/1024 [MB] (16 MBps) [2024-12-14T01:26:32.998Z] Copying: 759/1024 [MB] (21 MBps) [2024-12-14T01:26:33.941Z] Copying: 789/1024 [MB] (29 MBps) [2024-12-14T01:26:35.327Z] Copying: 812/1024 [MB] (23 MBps) [2024-12-14T01:26:36.268Z] Copying: 837/1024 [MB] (24 MBps) [2024-12-14T01:26:37.213Z] Copying: 869/1024 [MB] (32 MBps) [2024-12-14T01:26:38.158Z] Copying: 890/1024 [MB] (21 MBps) [2024-12-14T01:26:39.102Z] Copying: 908/1024 [MB] (17 MBps) [2024-12-14T01:26:40.046Z] Copying: 932/1024 [MB] (24 MBps) [2024-12-14T01:26:40.992Z] Copying: 948/1024 [MB] (15 MBps) [2024-12-14T01:26:42.378Z] Copying: 963/1024 [MB] (15 MBps) [2024-12-14T01:26:42.956Z] Copying: 982/1024 [MB] (19 MBps) [2024-12-14T01:26:44.340Z] Copying: 998/1024 [MB] (16 MBps) [2024-12-14T01:26:44.340Z] Copying: 1017/1024 [MB] (18 MBps) [2024-12-14T01:26:44.340Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-14 01:26:44.308927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.728 [2024-12-14 01:26:44.309016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:10.728 [2024-12-14 01:26:44.309035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:27:10.728 [2024-12-14 01:26:44.309047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.728 [2024-12-14 01:26:44.309078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:10.728 [2024-12-14 01:26:44.310087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.728 [2024-12-14 01:26:44.310121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:10.728 [2024-12-14 01:26:44.310136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:27:10.728 [2024-12-14 01:26:44.310148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.728 [2024-12-14 01:26:44.310447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.728 [2024-12-14 01:26:44.310552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:10.728 [2024-12-14 01:26:44.310567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:27:10.728 [2024-12-14 01:26:44.310579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.728 [2024-12-14 01:26:44.329446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.728 [2024-12-14 01:26:44.329522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:10.728 [2024-12-14 01:26:44.329546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:27:10.728 [2024-12-14 01:26:44.329555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.728 [2024-12-14 01:26:44.335713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.728 [2024-12-14 01:26:44.335756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:10.728 [2024-12-14 01:26:44.335768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:27:10.728 [2024-12-14 01:26:44.335778] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.338824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.338879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:10.988 [2024-12-14 01:26:44.338891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:27:10.988 [2024-12-14 01:26:44.338899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.344343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.344408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:10.988 [2024-12-14 01:26:44.344419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.396 ms 00:27:10.988 [2024-12-14 01:26:44.344428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.348617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.348678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:10.988 [2024-12-14 01:26:44.348690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.139 ms 00:27:10.988 [2024-12-14 01:26:44.348698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.351956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.352160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:10.988 [2024-12-14 01:26:44.352179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:27:10.988 [2024-12-14 01:26:44.352188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.354453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 
01:26:44.354503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:10.988 [2024-12-14 01:26:44.354513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:27:10.988 [2024-12-14 01:26:44.354521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.356507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.356555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:10.988 [2024-12-14 01:26:44.356565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:27:10.988 [2024-12-14 01:26:44.356572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.358831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.988 [2024-12-14 01:26:44.359007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:10.988 [2024-12-14 01:26:44.359025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:27:10.988 [2024-12-14 01:26:44.359032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.988 [2024-12-14 01:26:44.359160] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:10.988 [2024-12-14 01:26:44.359196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:10.988 [2024-12-14 01:26:44.359208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:10.988 [2024-12-14 01:26:44.359217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359233] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359340] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359588] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 
01:26:44.359720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 
[2024-12-14 01:26:44.359833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:10.988 [2024-12-14 01:26:44.359840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:27:10.989 [2024-12-14 01:26:44.359938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.359999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.360010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.360018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.360026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:10.989 [2024-12-14 01:26:44.360042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:10.989 [2024-12-14 01:26:44.360054] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 38b9c08e-308c-4ea7-a0bc-535ef4b9d616 00:27:10.989 [2024-12-14 
01:26:44.360070] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:10.989 [2024-12-14 01:26:44.360084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:27:10.989 [2024-12-14 01:26:44.360092] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:27:10.989 [2024-12-14 01:26:44.360101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:27:10.989 [2024-12-14 01:26:44.360108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:10.989 [2024-12-14 01:26:44.360116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:10.989 [2024-12-14 01:26:44.360124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:10.989 [2024-12-14 01:26:44.360130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:10.989 [2024-12-14 01:26:44.360137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:10.989 [2024-12-14 01:26:44.360144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.989 [2024-12-14 01:26:44.360152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:10.989 [2024-12-14 01:26:44.360162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:27:10.989 [2024-12-14 01:26:44.360169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.362556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.989 [2024-12-14 01:26:44.362591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:10.989 [2024-12-14 01:26:44.362602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:27:10.989 [2024-12-14 01:26:44.362611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.362756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:10.989 [2024-12-14 01:26:44.362767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:10.989 [2024-12-14 01:26:44.362780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:10.989 [2024-12-14 01:26:44.362787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.370243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.370444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:10.989 [2024-12-14 01:26:44.370464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.370483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.370542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.370552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:10.989 [2024-12-14 01:26:44.370565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.370573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.370660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.370671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:10.989 [2024-12-14 01:26:44.370679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.370687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.370703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.370712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:10.989 [2024-12-14 01:26:44.370721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.370733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.383757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.383805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:10.989 [2024-12-14 01:26:44.383817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.383825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.393887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.393939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:10.989 [2024-12-14 01:26:44.393963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.393971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:10.989 [2024-12-14 01:26:44.394035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:10.989 [2024-12-14 01:26:44.394094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:10.989 [2024-12-14 01:26:44.394171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:10.989 [2024-12-14 01:26:44.394189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:10.989 [2024-12-14 01:26:44.394247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:10.989 [2024-12-14 01:26:44.394342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.989 [2024-12-14 01:26:44.394402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:10.989 [2024-12-14 01:26:44.394410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.989 [2024-12-14 01:26:44.394419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.989 [2024-12-14 01:26:44.394553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.604 ms, result 0 00:27:10.989 
00:27:10.989 00:27:10.989 01:26:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:13.533 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:13.533 01:26:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:13.533 [2024-12-14 01:26:46.907798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:13.533 [2024-12-14 01:26:46.907962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94342 ] 00:27:13.533 [2024-12-14 01:26:47.053779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.533 [2024-12-14 01:26:47.082042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.795 [2024-12-14 01:26:47.204382] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:13.795 [2024-12-14 01:26:47.204471] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:13.795 [2024-12-14 01:26:47.368265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.795 [2024-12-14 01:26:47.368325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:13.795 [2024-12-14 01:26:47.368341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:13.795 [2024-12-14 01:26:47.368350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.795 [2024-12-14 01:26:47.368413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.795 [2024-12-14 01:26:47.368425] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.795 [2024-12-14 01:26:47.368434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:13.795 [2024-12-14 01:26:47.368451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.795 [2024-12-14 01:26:47.368483] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:13.795 [2024-12-14 01:26:47.368799] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:13.795 [2024-12-14 01:26:47.368820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.795 [2024-12-14 01:26:47.368828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.795 [2024-12-14 01:26:47.368841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:27:13.795 [2024-12-14 01:26:47.368849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.795 [2024-12-14 01:26:47.370573] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:13.796 [2024-12-14 01:26:47.374279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.374337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:13.796 [2024-12-14 01:26:47.374349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.708 ms 00:27:13.796 [2024-12-14 01:26:47.374368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.374456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.374469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:13.796 [2024-12-14 01:26:47.374483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:13.796 [2024-12-14 
01:26:47.374494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.382743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.382788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.796 [2024-12-14 01:26:47.382808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.204 ms 00:27:13.796 [2024-12-14 01:26:47.382816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.382916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.382926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.796 [2024-12-14 01:26:47.382939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:13.796 [2024-12-14 01:26:47.382947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.383008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.383018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:13.796 [2024-12-14 01:26:47.383026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:13.796 [2024-12-14 01:26:47.383038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.383060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:13.796 [2024-12-14 01:26:47.385069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.385103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:13.796 [2024-12-14 01:26:47.385113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:27:13.796 [2024-12-14 01:26:47.385121] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.385159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.385168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:13.796 [2024-12-14 01:26:47.385182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:13.796 [2024-12-14 01:26:47.385198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.385221] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:13.796 [2024-12-14 01:26:47.385245] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:13.796 [2024-12-14 01:26:47.385287] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:13.796 [2024-12-14 01:26:47.385304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:13.796 [2024-12-14 01:26:47.385410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:13.796 [2024-12-14 01:26:47.385421] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:13.796 [2024-12-14 01:26:47.385436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:13.796 [2024-12-14 01:26:47.385446] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:13.796 [2024-12-14 01:26:47.385456] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:13.796 [2024-12-14 01:26:47.385464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:13.796 [2024-12-14 01:26:47.385477] 
ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:13.796 [2024-12-14 01:26:47.385485] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:13.796 [2024-12-14 01:26:47.385496] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:13.796 [2024-12-14 01:26:47.385509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.385516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:13.796 [2024-12-14 01:26:47.385527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:27:13.796 [2024-12-14 01:26:47.385534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.385657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.796 [2024-12-14 01:26:47.385667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:13.796 [2024-12-14 01:26:47.385675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:13.796 [2024-12-14 01:26:47.385682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.796 [2024-12-14 01:26:47.385781] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:13.796 [2024-12-14 01:26:47.385793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:13.796 [2024-12-14 01:26:47.385802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.796 [2024-12-14 01:26:47.385811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:13.796 [2024-12-14 01:26:47.385829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 
00:27:13.796 [2024-12-14 01:26:47.385846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:13.796 [2024-12-14 01:26:47.385854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.796 [2024-12-14 01:26:47.385874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:13.796 [2024-12-14 01:26:47.385883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:13.796 [2024-12-14 01:26:47.385890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.796 [2024-12-14 01:26:47.385898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:13.796 [2024-12-14 01:26:47.385906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:13.796 [2024-12-14 01:26:47.385914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:13.796 [2024-12-14 01:26:47.385929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:13.796 [2024-12-14 01:26:47.385937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:13.796 [2024-12-14 01:26:47.385956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.796 [2024-12-14 01:26:47.385977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:13.796 [2024-12-14 01:26:47.385985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:13.796 [2024-12-14 01:26:47.385994] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 8.00 MiB 00:27:13.796 [2024-12-14 01:26:47.386007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:13.796 [2024-12-14 01:26:47.386015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.796 [2024-12-14 01:26:47.386030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:13.796 [2024-12-14 01:26:47.386038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.796 [2024-12-14 01:26:47.386054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:13.796 [2024-12-14 01:26:47.386062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.796 [2024-12-14 01:26:47.386079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:13.796 [2024-12-14 01:26:47.386086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:13.796 [2024-12-14 01:26:47.386097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.796 [2024-12-14 01:26:47.386105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:13.796 [2024-12-14 01:26:47.386113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:13.796 [2024-12-14 01:26:47.386120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:13.796 [2024-12-14 01:26:47.386137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:13.796 [2024-12-14 01:26:47.386145] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386151] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:13.796 [2024-12-14 01:26:47.386162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:13.796 [2024-12-14 01:26:47.386169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.796 [2024-12-14 01:26:47.386177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.796 [2024-12-14 01:26:47.386185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:13.796 [2024-12-14 01:26:47.386192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:13.796 [2024-12-14 01:26:47.386199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:13.796 [2024-12-14 01:26:47.386205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:13.796 [2024-12-14 01:26:47.386212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:13.796 [2024-12-14 01:26:47.386220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:13.797 [2024-12-14 01:26:47.386228] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:13.797 [2024-12-14 01:26:47.386238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:13.797 [2024-12-14 01:26:47.386255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:13.797 [2024-12-14 01:26:47.386265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:27:13.797 [2024-12-14 01:26:47.386272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:13.797 [2024-12-14 01:26:47.386279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:13.797 [2024-12-14 01:26:47.386286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:13.797 [2024-12-14 01:26:47.386293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:13.797 [2024-12-14 01:26:47.386301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:13.797 [2024-12-14 01:26:47.386308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:13.797 [2024-12-14 01:26:47.386322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:13.797 [2024-12-14 01:26:47.386360] 
upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:13.797 [2024-12-14 01:26:47.386368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:13.797 [2024-12-14 01:26:47.386388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:13.797 [2024-12-14 01:26:47.386397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:13.797 [2024-12-14 01:26:47.386404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:13.797 [2024-12-14 01:26:47.386412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.797 [2024-12-14 01:26:47.386420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:13.797 [2024-12-14 01:26:47.386427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:27:13.797 [2024-12-14 01:26:47.386437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.797 [2024-12-14 01:26:47.399572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.797 [2024-12-14 01:26:47.399652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:13.797 [2024-12-14 01:26:47.399665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.089 ms 00:27:13.797 [2024-12-14 01:26:47.399673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.797 [2024-12-14 01:26:47.399763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:13.797 [2024-12-14 01:26:47.399773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:13.797 [2024-12-14 01:26:47.399781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:13.797 [2024-12-14 01:26:47.399789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.427758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.428107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.059 [2024-12-14 01:26:47.428153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.907 ms 00:27:14.059 [2024-12-14 01:26:47.428176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.428277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.428303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.059 [2024-12-14 01:26:47.428339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:14.059 [2024-12-14 01:26:47.428369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.429136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.429212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.059 [2024-12-14 01:26:47.429255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:27:14.059 [2024-12-14 01:26:47.429275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.429693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.429723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.059 [2024-12-14 01:26:47.429746] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:27:14.059 [2024-12-14 01:26:47.429766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.437443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.437487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.059 [2024-12-14 01:26:47.437498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.629 ms 00:27:14.059 [2024-12-14 01:26:47.437505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.441251] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:14.059 [2024-12-14 01:26:47.441305] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.059 [2024-12-14 01:26:47.441323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.441331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.059 [2024-12-14 01:26:47.441340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.708 ms 00:27:14.059 [2024-12-14 01:26:47.441347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.457137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.457185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.059 [2024-12-14 01:26:47.457198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.738 ms 00:27:14.059 [2024-12-14 01:26:47.457206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.460010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.460059] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.059 [2024-12-14 01:26:47.460069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:27:14.059 [2024-12-14 01:26:47.460076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.462548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.462594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.059 [2024-12-14 01:26:47.462605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:27:14.059 [2024-12-14 01:26:47.462643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.463004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.463018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.059 [2024-12-14 01:26:47.463032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:14.059 [2024-12-14 01:26:47.463045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.486579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.486808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.059 [2024-12-14 01:26:47.486838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:27:14.059 [2024-12-14 01:26:47.486847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.494816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.059 [2024-12-14 01:26:47.497754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.497909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize L2P 00:27:14.059 [2024-12-14 01:26:47.497934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.866 ms 00:27:14.059 [2024-12-14 01:26:47.497944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.498026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.498039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.059 [2024-12-14 01:26:47.498055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:14.059 [2024-12-14 01:26:47.498063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.498809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.498853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.059 [2024-12-14 01:26:47.498864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:27:14.059 [2024-12-14 01:26:47.498871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.498897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.498906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.059 [2024-12-14 01:26:47.498915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:14.059 [2024-12-14 01:26:47.498922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.498966] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.059 [2024-12-14 01:26:47.498976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-12-14 01:26:47.498984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.059 [2024-12-14 01:26:47.498998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:14.059 [2024-12-14 01:26:47.499010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-12-14 01:26:47.504152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.060 [2024-12-14 01:26:47.504200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.060 [2024-12-14 01:26:47.504211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:27:14.060 [2024-12-14 01:26:47.504219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.060 [2024-12-14 01:26:47.504295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.060 [2024-12-14 01:26:47.504305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.060 [2024-12-14 01:26:47.504314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:14.060 [2024-12-14 01:26:47.504330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.060 [2024-12-14 01:26:47.505422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.707 ms, result 0 00:27:15.476  [2024-12-14T01:26:50.033Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-14T01:26:50.977Z] Copying: 21/1024 [MB] (10 MBps) [2024-12-14T01:26:51.919Z] Copying: 32/1024 [MB] (10 MBps) [2024-12-14T01:26:52.862Z] Copying: 49/1024 [MB] (16 MBps) [2024-12-14T01:26:53.807Z] Copying: 61/1024 [MB] (12 MBps) [2024-12-14T01:26:54.751Z] Copying: 81/1024 [MB] (19 MBps) [2024-12-14T01:26:55.694Z] Copying: 105/1024 [MB] (23 MBps) [2024-12-14T01:26:57.080Z] Copying: 128/1024 [MB] (23 MBps) [2024-12-14T01:26:58.025Z] Copying: 150/1024 [MB] (21 MBps) [2024-12-14T01:26:58.968Z] Copying: 165/1024 [MB] (14 MBps) [2024-12-14T01:26:59.912Z] Copying: 179/1024 [MB] (14 MBps) [2024-12-14T01:27:00.854Z] Copying: 194/1024 [MB] (15 MBps) 
[2024-12-14T01:27:01.794Z] Copying: 208/1024 [MB] (13 MBps) [2024-12-14T01:27:02.738Z] Copying: 220/1024 [MB] (11 MBps) [2024-12-14T01:27:04.123Z] Copying: 230/1024 [MB] (10 MBps) [2024-12-14T01:27:04.696Z] Copying: 241/1024 [MB] (10 MBps) [2024-12-14T01:27:06.080Z] Copying: 252/1024 [MB] (10 MBps) [2024-12-14T01:27:07.025Z] Copying: 263/1024 [MB] (10 MBps) [2024-12-14T01:27:07.968Z] Copying: 274/1024 [MB] (11 MBps) [2024-12-14T01:27:08.913Z] Copying: 285/1024 [MB] (10 MBps) [2024-12-14T01:27:09.859Z] Copying: 295/1024 [MB] (10 MBps) [2024-12-14T01:27:10.803Z] Copying: 307/1024 [MB] (11 MBps) [2024-12-14T01:27:11.787Z] Copying: 318/1024 [MB] (11 MBps) [2024-12-14T01:27:12.767Z] Copying: 329/1024 [MB] (11 MBps) [2024-12-14T01:27:13.711Z] Copying: 340/1024 [MB] (10 MBps) [2024-12-14T01:27:15.098Z] Copying: 350/1024 [MB] (10 MBps) [2024-12-14T01:27:16.038Z] Copying: 361/1024 [MB] (10 MBps) [2024-12-14T01:27:16.979Z] Copying: 372/1024 [MB] (10 MBps) [2024-12-14T01:27:17.921Z] Copying: 402/1024 [MB] (30 MBps) [2024-12-14T01:27:18.864Z] Copying: 421/1024 [MB] (18 MBps) [2024-12-14T01:27:19.808Z] Copying: 431/1024 [MB] (10 MBps) [2024-12-14T01:27:20.751Z] Copying: 442/1024 [MB] (10 MBps) [2024-12-14T01:27:21.694Z] Copying: 452/1024 [MB] (10 MBps) [2024-12-14T01:27:23.081Z] Copying: 463/1024 [MB] (10 MBps) [2024-12-14T01:27:24.025Z] Copying: 473/1024 [MB] (10 MBps) [2024-12-14T01:27:24.967Z] Copying: 484/1024 [MB] (10 MBps) [2024-12-14T01:27:25.907Z] Copying: 515/1024 [MB] (31 MBps) [2024-12-14T01:27:26.851Z] Copying: 537/1024 [MB] (22 MBps) [2024-12-14T01:27:27.795Z] Copying: 557/1024 [MB] (19 MBps) [2024-12-14T01:27:28.737Z] Copying: 580/1024 [MB] (22 MBps) [2024-12-14T01:27:29.691Z] Copying: 598/1024 [MB] (17 MBps) [2024-12-14T01:27:31.076Z] Copying: 614/1024 [MB] (16 MBps) [2024-12-14T01:27:32.019Z] Copying: 644/1024 [MB] (30 MBps) [2024-12-14T01:27:32.963Z] Copying: 656/1024 [MB] (11 MBps) [2024-12-14T01:27:33.923Z] Copying: 676/1024 [MB] (19 MBps) 
[2024-12-14T01:27:34.914Z] Copying: 691/1024 [MB] (14 MBps) [2024-12-14T01:27:35.855Z] Copying: 707/1024 [MB] (16 MBps) [2024-12-14T01:27:36.798Z] Copying: 720/1024 [MB] (12 MBps) [2024-12-14T01:27:37.741Z] Copying: 734/1024 [MB] (14 MBps) [2024-12-14T01:27:38.684Z] Copying: 752/1024 [MB] (17 MBps) [2024-12-14T01:27:40.072Z] Copying: 774/1024 [MB] (21 MBps) [2024-12-14T01:27:41.016Z] Copying: 792/1024 [MB] (18 MBps) [2024-12-14T01:27:41.959Z] Copying: 803/1024 [MB] (10 MBps) [2024-12-14T01:27:42.900Z] Copying: 817/1024 [MB] (14 MBps) [2024-12-14T01:27:43.844Z] Copying: 830/1024 [MB] (12 MBps) [2024-12-14T01:27:44.788Z] Copying: 848/1024 [MB] (17 MBps) [2024-12-14T01:27:45.730Z] Copying: 858/1024 [MB] (10 MBps) [2024-12-14T01:27:47.118Z] Copying: 868/1024 [MB] (10 MBps) [2024-12-14T01:27:47.691Z] Copying: 879/1024 [MB] (10 MBps) [2024-12-14T01:27:49.078Z] Copying: 898/1024 [MB] (18 MBps) [2024-12-14T01:27:50.022Z] Copying: 917/1024 [MB] (18 MBps) [2024-12-14T01:27:50.966Z] Copying: 935/1024 [MB] (18 MBps) [2024-12-14T01:27:51.911Z] Copying: 946/1024 [MB] (10 MBps) [2024-12-14T01:27:52.853Z] Copying: 958/1024 [MB] (11 MBps) [2024-12-14T01:27:53.796Z] Copying: 974/1024 [MB] (15 MBps) [2024-12-14T01:27:54.740Z] Copying: 994/1024 [MB] (20 MBps) [2024-12-14T01:27:55.313Z] Copying: 1015/1024 [MB] (21 MBps) [2024-12-14T01:27:55.313Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-14 01:27:55.054114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.054387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.701 [2024-12-14 01:27:55.054415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:21.701 [2024-12-14 01:27:55.054435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.054468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.701 
[2024-12-14 01:27:55.055235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.055265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.701 [2024-12-14 01:27:55.055276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:28:21.701 [2024-12-14 01:27:55.055286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.055517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.055529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.701 [2024-12-14 01:27:55.055538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:28:21.701 [2024-12-14 01:27:55.055550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.059882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.059905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.701 [2024-12-14 01:27:55.059916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:28:21.701 [2024-12-14 01:27:55.059924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.067308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.067484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.701 [2024-12-14 01:27:55.067551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.362 ms 00:28:21.701 [2024-12-14 01:27:55.067575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.070752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.071078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache 
metadata 00:28:21.701 [2024-12-14 01:27:55.071157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:28:21.701 [2024-12-14 01:27:55.071181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.076076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.076261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.701 [2024-12-14 01:27:55.076763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:28:21.701 [2024-12-14 01:27:55.076789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.079197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.079349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.701 [2024-12-14 01:27:55.079414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:28:21.701 [2024-12-14 01:27:55.079449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.083353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.083509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.701 [2024-12-14 01:27:55.083564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.869 ms 00:28:21.701 [2024-12-14 01:27:55.083587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.086738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.086891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:21.701 [2024-12-14 01:27:55.086945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:28:21.701 [2024-12-14 01:27:55.086967] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.089505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.089694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.701 [2024-12-14 01:27:55.089754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.460 ms 00:28:21.701 [2024-12-14 01:27:55.089777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.091907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.701 [2024-12-14 01:27:55.092054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.701 [2024-12-14 01:27:55.092108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:28:21.701 [2024-12-14 01:27:55.092130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.701 [2024-12-14 01:27:55.092203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.701 [2024-12-14 01:27:55.092245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.701 [2024-12-14 01:27:55.092277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:21.701 [2024-12-14 01:27:55.092307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092470] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.701 [2024-12-14 01:27:55.092754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.092998] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093500] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 
01:27:55.093765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 
[2024-12-14 01:27:55.093872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:28:21.702 [2024-12-14 01:27:55.093979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.093993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: 
free 00:28:21.702 [2024-12-14 01:27:55.094086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:21.702 [2024-12-14 01:27:55.094169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.702 [2024-12-14 01:27:55.094177] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 38b9c08e-308c-4ea7-a0bc-535ef4b9d616 00:28:21.702 [2024-12-14 01:27:55.094186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:21.702 [2024-12-14 01:27:55.094194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:21.702 [2024-12-14 01:27:55.094201] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:21.703 [2024-12-14 01:27:55.094210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:21.703 [2024-12-14 01:27:55.094217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.703 [2024-12-14 01:27:55.094225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.703 [2024-12-14 01:27:55.094233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.703 [2024-12-14 01:27:55.094239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.703 [2024-12-14 01:27:55.094245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.703 [2024-12-14 01:27:55.094255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.703 [2024-12-14 01:27:55.094278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.703 [2024-12-14 01:27:55.094287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:28:21.703 [2024-12-14 01:27:55.094295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.096690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.703 [2024-12-14 01:27:55.096723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:21.703 [2024-12-14 01:27:55.096744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:28:21.703 [2024-12-14 01:27:55.096756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.096888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.703 [2024-12-14 01:27:55.096900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.703 [2024-12-14 01:27:55.096910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:21.703 [2024-12-14 
01:27:55.096918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.104498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.104702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.703 [2024-12-14 01:27:55.104722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.104739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.104801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.104811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.703 [2024-12-14 01:27:55.104825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.104833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.104900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.104910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.703 [2024-12-14 01:27:55.104920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.104932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.104952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.104961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.703 [2024-12-14 01:27:55.104969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.104976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.118881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:21.703 [2024-12-14 01:27:55.118920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.703 [2024-12-14 01:27:55.118930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.118944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.703 [2024-12-14 01:27:55.129561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.703 [2024-12-14 01:27:55.129677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.703 [2024-12-14 01:27:55.129746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.703 [2024-12-14 01:27:55.129855] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:21.703 [2024-12-14 01:27:55.129920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.129968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.129978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.703 [2024-12-14 01:27:55.129986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.129998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.130048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.703 [2024-12-14 01:27:55.130060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.703 [2024-12-14 01:27:55.130069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.703 [2024-12-14 01:27:55.130084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.703 [2024-12-14 01:27:55.130213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.066 ms, result 0 00:28:21.964 00:28:21.964 00:28:21.964 01:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:24.538 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # 
trap - SIGINT SIGTERM EXIT 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92583 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92583 ']' 00:28:24.538 Process with pid 92583 is not found 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92583 00:28:24.538 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92583) - No such process 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92583 is not found' 00:28:24.538 01:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:24.538 Remove shared memory files 00:28:24.538 01:27:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:24.538 01:27:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:24.538 01:27:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:24.538 01:27:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:24.798 01:27:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:24.798 01:27:58 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:24.798 01:27:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:24.798 ************************************ 00:28:24.798 END TEST ftl_dirty_shutdown 00:28:24.798 ************************************ 00:28:24.798 00:28:24.798 real 3m55.947s 00:28:24.798 user 4m25.423s 00:28:24.798 sys 0m28.431s 00:28:24.798 01:27:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.798 01:27:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.798 01:27:58 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:24.798 01:27:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:24.799 01:27:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.799 01:27:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:24.799 ************************************ 00:28:24.799 START TEST ftl_upgrade_shutdown 00:28:24.799 ************************************ 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:24.799 * Looking for test storage... 
00:28:24.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.799 --rc genhtml_branch_coverage=1 00:28:24.799 --rc genhtml_function_coverage=1 00:28:24.799 --rc genhtml_legend=1 00:28:24.799 --rc geninfo_all_blocks=1 00:28:24.799 --rc geninfo_unexecuted_blocks=1 00:28:24.799 00:28:24.799 ' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.799 --rc genhtml_branch_coverage=1 00:28:24.799 --rc 
genhtml_function_coverage=1 00:28:24.799 --rc genhtml_legend=1 00:28:24.799 --rc geninfo_all_blocks=1 00:28:24.799 --rc geninfo_unexecuted_blocks=1 00:28:24.799 00:28:24.799 ' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.799 --rc genhtml_branch_coverage=1 00:28:24.799 --rc genhtml_function_coverage=1 00:28:24.799 --rc genhtml_legend=1 00:28:24.799 --rc geninfo_all_blocks=1 00:28:24.799 --rc geninfo_unexecuted_blocks=1 00:28:24.799 00:28:24.799 ' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.799 --rc genhtml_branch_coverage=1 00:28:24.799 --rc genhtml_function_coverage=1 00:28:24.799 --rc genhtml_legend=1 00:28:24.799 --rc geninfo_all_blocks=1 00:28:24.799 --rc geninfo_unexecuted_blocks=1 00:28:24.799 00:28:24.799 ' 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:24.799 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown 
-- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:25.060 01:27:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95139 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95139 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95139 ']' 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.060 01:27:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.060 [2024-12-14 01:27:58.517697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:25.060 [2024-12-14 01:27:58.517849] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95139 ] 00:28:25.060 [2024-12-14 01:27:58.662159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.321 [2024-12-14 01:27:58.691366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:25.893 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:26.155 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.417 { 00:28:26.417 "name": "basen1", 00:28:26.417 "aliases": [ 00:28:26.417 "aa1a79cd-718f-4b55-a05b-50558f20dda2" 00:28:26.417 ], 00:28:26.417 "product_name": 
"NVMe disk", 00:28:26.417 "block_size": 4096, 00:28:26.417 "num_blocks": 1310720, 00:28:26.417 "uuid": "aa1a79cd-718f-4b55-a05b-50558f20dda2", 00:28:26.417 "numa_id": -1, 00:28:26.417 "assigned_rate_limits": { 00:28:26.417 "rw_ios_per_sec": 0, 00:28:26.417 "rw_mbytes_per_sec": 0, 00:28:26.417 "r_mbytes_per_sec": 0, 00:28:26.417 "w_mbytes_per_sec": 0 00:28:26.417 }, 00:28:26.417 "claimed": true, 00:28:26.417 "claim_type": "read_many_write_one", 00:28:26.417 "zoned": false, 00:28:26.417 "supported_io_types": { 00:28:26.417 "read": true, 00:28:26.417 "write": true, 00:28:26.417 "unmap": true, 00:28:26.417 "flush": true, 00:28:26.417 "reset": true, 00:28:26.417 "nvme_admin": true, 00:28:26.417 "nvme_io": true, 00:28:26.417 "nvme_io_md": false, 00:28:26.417 "write_zeroes": true, 00:28:26.417 "zcopy": false, 00:28:26.417 "get_zone_info": false, 00:28:26.417 "zone_management": false, 00:28:26.417 "zone_append": false, 00:28:26.417 "compare": true, 00:28:26.417 "compare_and_write": false, 00:28:26.417 "abort": true, 00:28:26.417 "seek_hole": false, 00:28:26.417 "seek_data": false, 00:28:26.417 "copy": true, 00:28:26.417 "nvme_iov_md": false 00:28:26.417 }, 00:28:26.417 "driver_specific": { 00:28:26.417 "nvme": [ 00:28:26.417 { 00:28:26.417 "pci_address": "0000:00:11.0", 00:28:26.417 "trid": { 00:28:26.417 "trtype": "PCIe", 00:28:26.417 "traddr": "0000:00:11.0" 00:28:26.417 }, 00:28:26.417 "ctrlr_data": { 00:28:26.417 "cntlid": 0, 00:28:26.417 "vendor_id": "0x1b36", 00:28:26.417 "model_number": "QEMU NVMe Ctrl", 00:28:26.417 "serial_number": "12341", 00:28:26.417 "firmware_revision": "8.0.0", 00:28:26.417 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:26.417 "oacs": { 00:28:26.417 "security": 0, 00:28:26.417 "format": 1, 00:28:26.417 "firmware": 0, 00:28:26.417 "ns_manage": 1 00:28:26.417 }, 00:28:26.417 "multi_ctrlr": false, 00:28:26.417 "ana_reporting": false 00:28:26.417 }, 00:28:26.417 "vs": { 00:28:26.417 "nvme_version": "1.4" 00:28:26.417 }, 00:28:26.417 "ns_data": { 
00:28:26.417 "id": 1, 00:28:26.417 "can_share": false 00:28:26.417 } 00:28:26.417 } 00:28:26.417 ], 00:28:26.417 "mp_policy": "active_passive" 00:28:26.417 } 00:28:26.417 } 00:28:26.417 ]' 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:26.417 01:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:26.679 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=60166dcf-c8f1-4d4a-88af-d4b046f75c24 00:28:26.679 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:26.679 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60166dcf-c8f1-4d4a-88af-d4b046f75c24 00:28:26.941 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:27.202 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=15367675-d63f-4aef-9c2c-a96715ec7b42 00:28:27.202 01:28:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 15367675-d63f-4aef-9c2c-a96715ec7b42 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b ]] 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 5120 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:27.463 01:28:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:27.725 { 00:28:27.725 "name": "806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b", 00:28:27.725 "aliases": [ 00:28:27.725 "lvs/basen1p0" 00:28:27.725 ], 00:28:27.725 "product_name": "Logical Volume", 00:28:27.725 
"block_size": 4096, 00:28:27.725 "num_blocks": 5242880, 00:28:27.725 "uuid": "806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b", 00:28:27.725 "assigned_rate_limits": { 00:28:27.725 "rw_ios_per_sec": 0, 00:28:27.725 "rw_mbytes_per_sec": 0, 00:28:27.725 "r_mbytes_per_sec": 0, 00:28:27.725 "w_mbytes_per_sec": 0 00:28:27.725 }, 00:28:27.725 "claimed": false, 00:28:27.725 "zoned": false, 00:28:27.725 "supported_io_types": { 00:28:27.725 "read": true, 00:28:27.725 "write": true, 00:28:27.725 "unmap": true, 00:28:27.725 "flush": false, 00:28:27.725 "reset": true, 00:28:27.725 "nvme_admin": false, 00:28:27.725 "nvme_io": false, 00:28:27.725 "nvme_io_md": false, 00:28:27.725 "write_zeroes": true, 00:28:27.725 "zcopy": false, 00:28:27.725 "get_zone_info": false, 00:28:27.725 "zone_management": false, 00:28:27.725 "zone_append": false, 00:28:27.725 "compare": false, 00:28:27.725 "compare_and_write": false, 00:28:27.725 "abort": false, 00:28:27.725 "seek_hole": true, 00:28:27.725 "seek_data": true, 00:28:27.725 "copy": false, 00:28:27.725 "nvme_iov_md": false 00:28:27.725 }, 00:28:27.725 "driver_specific": { 00:28:27.725 "lvol": { 00:28:27.725 "lvol_store_uuid": "15367675-d63f-4aef-9c2c-a96715ec7b42", 00:28:27.725 "base_bdev": "basen1", 00:28:27.725 "thin_provision": true, 00:28:27.725 "num_allocated_clusters": 0, 00:28:27.725 "snapshot": false, 00:28:27.725 "clone": false, 00:28:27.725 "esnap_clone": false 00:28:27.725 } 00:28:27.725 } 00:28:27.725 } 00:28:27.725 ]' 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:27.725 01:28:01 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:27.725 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:27.986 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:27.986 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:27.986 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:28.248 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:28.248 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:28.248 01:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 806dbb34-d20d-4c85-bc2b-4cf8c3d0ab2b -c cachen1p0 --l2p_dram_limit 2 00:28:28.248 [2024-12-14 01:28:01.844896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.845150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:28.248 [2024-12-14 01:28:01.845176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:28.248 [2024-12-14 01:28:01.845188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.845268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.845287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:28.248 [2024-12-14 01:28:01.845302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:28.248 [2024-12-14 01:28:01.845316] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.845341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:28.248 [2024-12-14 01:28:01.845730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:28.248 [2024-12-14 01:28:01.845763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.845786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:28.248 [2024-12-14 01:28:01.845801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:28:28.248 [2024-12-14 01:28:01.845815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.845957] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f8d2383c-aa2e-4805-87f5-b787e5398960 00:28:28.248 [2024-12-14 01:28:01.847716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.847763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:28.248 [2024-12-14 01:28:01.847780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:28.248 [2024-12-14 01:28:01.847790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.856714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.856762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:28.248 [2024-12-14 01:28:01.856776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.869 ms 00:28:28.248 [2024-12-14 01:28:01.856784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.856846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 
[2024-12-14 01:28:01.856855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:28.248 [2024-12-14 01:28:01.856866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:28.248 [2024-12-14 01:28:01.856874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.856942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.248 [2024-12-14 01:28:01.856953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:28.248 [2024-12-14 01:28:01.856967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:28.248 [2024-12-14 01:28:01.856975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.248 [2024-12-14 01:28:01.857005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:28.510 [2024-12-14 01:28:01.859375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.510 [2024-12-14 01:28:01.859560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:28.510 [2024-12-14 01:28:01.859579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.378 ms 00:28:28.510 [2024-12-14 01:28:01.859591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.510 [2024-12-14 01:28:01.859653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.510 [2024-12-14 01:28:01.859672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:28.510 [2024-12-14 01:28:01.859683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:28.510 [2024-12-14 01:28:01.859697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.510 [2024-12-14 01:28:01.859717] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:28.510 [2024-12-14 01:28:01.859885] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:28.510 [2024-12-14 01:28:01.859898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:28.510 [2024-12-14 01:28:01.859912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:28.510 [2024-12-14 01:28:01.859923] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:28.510 [2024-12-14 01:28:01.859939] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:28.510 [2024-12-14 01:28:01.859948] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:28.510 [2024-12-14 01:28:01.859963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:28.510 [2024-12-14 01:28:01.859970] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:28.510 [2024-12-14 01:28:01.859981] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:28.510 [2024-12-14 01:28:01.859992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.510 [2024-12-14 01:28:01.860002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:28.510 [2024-12-14 01:28:01.860010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:28:28.510 [2024-12-14 01:28:01.860020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.510 [2024-12-14 01:28:01.860105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.510 [2024-12-14 01:28:01.860118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:28.510 [2024-12-14 01:28:01.860125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:28.510 [2024-12-14 01:28:01.860139] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.510 [2024-12-14 01:28:01.860239] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:28.510 [2024-12-14 01:28:01.860252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:28.510 [2024-12-14 01:28:01.860260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.510 [2024-12-14 01:28:01.860271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.510 [2024-12-14 01:28:01.860279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:28.510 [2024-12-14 01:28:01.860288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:28.510 [2024-12-14 01:28:01.860295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:28.511 [2024-12-14 01:28:01.860304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:28.511 [2024-12-14 01:28:01.860311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:28.511 [2024-12-14 01:28:01.860320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:28.511 [2024-12-14 01:28:01.860334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:28.511 [2024-12-14 01:28:01.860343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:28.511 [2024-12-14 01:28:01.860361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:28.511 [2024-12-14 01:28:01.860370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:28.511 [2024-12-14 01:28:01.860384] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:28.511 [2024-12-14 01:28:01.860392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:28.511 [2024-12-14 01:28:01.860408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:28.511 [2024-12-14 01:28:01.860437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:28.511 [2024-12-14 01:28:01.860461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:28.511 [2024-12-14 01:28:01.860487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:28.511 [2024-12-14 01:28:01.860509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:28.511 [2024-12-14 01:28:01.860533] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:28.511 [2024-12-14 01:28:01.860556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:28.511 [2024-12-14 01:28:01.860579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:28.511 [2024-12-14 01:28:01.860585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860594] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:28.511 [2024-12-14 01:28:01.860602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:28.511 [2024-12-14 01:28:01.860613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.511 [2024-12-14 01:28:01.860649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.511 [2024-12-14 01:28:01.860660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:28.511 [2024-12-14 01:28:01.860676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:28.511 [2024-12-14 01:28:01.860685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:28.511 [2024-12-14 01:28:01.860692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:28.511 [2024-12-14 01:28:01.860701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:28.511 [2024-12-14 01:28:01.860708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:28.511 [2024-12-14 
01:28:01.860719] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:28.511 [2024-12-14 01:28:01.860733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:28.511 [2024-12-14 01:28:01.860755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:28.511 [2024-12-14 01:28:01.860782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:28.511 [2024-12-14 01:28:01.860790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:28.511 [2024-12-14 01:28:01.860804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:28.511 [2024-12-14 01:28:01.860815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 
blk_offs:0x2f20 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:28.511 [2024-12-14 01:28:01.860916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:28.511 [2024-12-14 01:28:01.860929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.511 [2024-12-14 01:28:01.860956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:28.511 [2024-12-14 01:28:01.860970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:28.511 [2024-12-14 01:28:01.860980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:28.511 [2024-12-14 01:28:01.860997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.511 [2024-12-14 01:28:01.861010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:28.511 
[2024-12-14 01:28:01.861031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.822 ms 00:28:28.511 [2024-12-14 01:28:01.861045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.511 [2024-12-14 01:28:01.861146] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:28.511 [2024-12-14 01:28:01.861166] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:31.815 [2024-12-14 01:28:05.283454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.283550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:31.815 [2024-12-14 01:28:05.283573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3422.290 ms 00:28:31.815 [2024-12-14 01:28:05.283583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.301875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.302154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:31.815 [2024-12-14 01:28:05.302187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.131 ms 00:28:31.815 [2024-12-14 01:28:05.302199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.302269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.302279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:31.815 [2024-12-14 01:28:05.302291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:31.815 [2024-12-14 01:28:05.302300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.319275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 
01:28:05.319339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:31.815 [2024-12-14 01:28:05.319356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.898 ms 00:28:31.815 [2024-12-14 01:28:05.319376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.319419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.319430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:31.815 [2024-12-14 01:28:05.319442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:31.815 [2024-12-14 01:28:05.319450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.320183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.320223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:31.815 [2024-12-14 01:28:05.320238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:28:31.815 [2024-12-14 01:28:05.320249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.320313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.320324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:31.815 [2024-12-14 01:28:05.320335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:31.815 [2024-12-14 01:28:05.320344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.331919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.332140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:31.815 [2024-12-14 01:28:05.332165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.545 
ms 00:28:31.815 [2024-12-14 01:28:05.332175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.354286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:31.815 [2024-12-14 01:28:05.356030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.356276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:31.815 [2024-12-14 01:28:05.356302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.734 ms 00:28:31.815 [2024-12-14 01:28:05.356318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.374997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.375212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:31.815 [2024-12-14 01:28:05.375239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.626 ms 00:28:31.815 [2024-12-14 01:28:05.375255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.375370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.375387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:31.815 [2024-12-14 01:28:05.375397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:31.815 [2024-12-14 01:28:05.375408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.381149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.381213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:31.815 [2024-12-14 01:28:05.381229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.717 ms 00:28:31.815 [2024-12-14 01:28:05.381240] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.386615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.386690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:31.815 [2024-12-14 01:28:05.386702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.320 ms 00:28:31.815 [2024-12-14 01:28:05.386714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.815 [2024-12-14 01:28:05.387049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.815 [2024-12-14 01:28:05.387074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:31.815 [2024-12-14 01:28:05.387085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:28:31.815 [2024-12-14 01:28:05.387099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.434847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.434915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:32.076 [2024-12-14 01:28:05.434933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.705 ms 00:28:32.076 [2024-12-14 01:28:05.434946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.443524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.443589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:32.076 [2024-12-14 01:28:05.443604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.502 ms 00:28:32.076 [2024-12-14 01:28:05.443617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.450481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.450543] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:32.076 [2024-12-14 01:28:05.450561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.776 ms 00:28:32.076 [2024-12-14 01:28:05.450572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.460167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.460264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:32.076 [2024-12-14 01:28:05.460283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.514 ms 00:28:32.076 [2024-12-14 01:28:05.460299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.460375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.460389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:32.076 [2024-12-14 01:28:05.460401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:32.076 [2024-12-14 01:28:05.460413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.460532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.076 [2024-12-14 01:28:05.460548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:32.076 [2024-12-14 01:28:05.460558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:32.076 [2024-12-14 01:28:05.460573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.076 [2024-12-14 01:28:05.462153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3616.683 ms, result 0 00:28:32.076 { 00:28:32.076 "name": "ftl", 00:28:32.076 "uuid": "f8d2383c-aa2e-4805-87f5-b787e5398960" 00:28:32.076 } 00:28:32.076 01:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:32.337 [2024-12-14 01:28:05.766930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.337 01:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:32.598 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:32.598 [2024-12-14 01:28:06.203469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:32.859 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:32.859 [2024-12-14 01:28:06.411833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:32.859 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:33.430 Fill FTL, iteration 1 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95261 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95261 /var/tmp/spdk.tgt.sock 00:28:33.430 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95261 ']' 00:28:33.431 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:33.431 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.431 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:33.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
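The `waitforlisten 95261 /var/tmp/spdk.tgt.sock` call above blocks until the freshly launched `spdk_tgt` is up and serving its UNIX-domain RPC socket. A minimal sketch of that idea follows; the function name is illustrative, and the real `waitforlisten` in `autotest_common.sh` additionally verifies the PID is still alive and issues a test RPC, which this sketch omits.

```shell
# Minimal sketch of a wait-for-socket helper: poll until the UNIX-domain
# RPC socket appears, or give up after a bounded number of retries.
# (Hypothetical simplification of autotest_common.sh's waitforlisten.)
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0   # socket file exists: target is listening
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                         # timed out waiting for the socket
}
```

The bounded retry loop is what lets the harness fail fast with "Waiting for process..." diagnostics instead of hanging if `spdk_tgt` crashes during startup.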
00:28:33.431 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.431 01:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.431 [2024-12-14 01:28:06.863097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:33.431 [2024-12-14 01:28:06.863517] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95261 ] 00:28:33.431 [2024-12-14 01:28:07.009310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.431 [2024-12-14 01:28:07.038250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.373 01:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.373 01:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:34.373 01:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:34.634 ftln1 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95261 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95261 ']' 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95261 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@959 -- # uname 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.634 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95261 00:28:34.895 killing process with pid 95261 00:28:34.895 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.895 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.895 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95261' 00:28:34.895 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95261 00:28:34.895 01:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95261 00:28:35.156 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:35.156 01:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:35.156 [2024-12-14 01:28:08.606796] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
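The `spdk_dd` invocation above streams `count=1024` blocks of `bs=1048576` bytes (1 GiB total) from `/dev/urandom` into the `ftln1` bdev at queue depth 2, starting at `seek=0`. As a rough local stand-in (plain `dd` against a temp file instead of `spdk_dd` against an NVMe-oF attached bdev, and with `count` shrunk so the sketch runs quickly), the fill step amounts to:

```shell
# Approximation of the tcp_dd fill step: write count blocks of bs bytes
# of random data, starting seek blocks into the output. A temp file
# stands in for the ftln1 bdev; the real test uses spdk_dd with --ob=ftln1.
bs=1048576
count=4        # upgrade_shutdown.sh uses count=1024 (1 GiB per iteration)
seek=0
out=$(mktemp)
dd if=/dev/urandom of="$out" bs="$bs" count="$count" seek="$seek" conv=notrunc 2>/dev/null
wc -c < "$out"   # total bytes written: bs * count (when seek=0)
```

`seek` is what lets the second iteration land at the 1 GiB mark instead of overwriting iteration 1 (`--seek=1024` in the later invocation), and `conv=notrunc` mirrors writing into a fixed-size block device rather than truncating a file.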
00:28:35.156 [2024-12-14 01:28:08.607519] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95293 ] 00:28:35.156 [2024-12-14 01:28:08.749416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.417 [2024-12-14 01:28:08.771281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.801  [2024-12-14T01:28:10.986Z] Copying: 181/1024 [MB] (181 MBps) [2024-12-14T01:28:12.374Z] Copying: 362/1024 [MB] (181 MBps) [2024-12-14T01:28:13.318Z] Copying: 562/1024 [MB] (200 MBps) [2024-12-14T01:28:14.260Z] Copying: 742/1024 [MB] (180 MBps) [2024-12-14T01:28:14.522Z] Copying: 933/1024 [MB] (191 MBps) [2024-12-14T01:28:14.783Z] Copying: 1024/1024 [MB] (average 185 MBps) 00:28:41.171 00:28:41.171 Calculate MD5 checksum, iteration 1 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:41.171 01:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' 
--rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:41.432 [2024-12-14 01:28:14.839027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:41.432 [2024-12-14 01:28:14.839366] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95361 ] 00:28:41.432 [2024-12-14 01:28:14.982280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.432 [2024-12-14 01:28:15.021128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.822  [2024-12-14T01:28:17.370Z] Copying: 528/1024 [MB] (528 MBps) [2024-12-14T01:28:17.370Z] Copying: 1024/1024 [MB] (average 541 MBps) 00:28:43.758 00:28:43.758 01:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:43.758 01:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:45.659 Fill FTL, iteration 2 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1596f94e2443ce6b92ca67631544f09c 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@198 -- # tcp_initiator_setup 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:45.659 01:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:45.659 [2024-12-14 01:28:19.076642] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:45.659 [2024-12-14 01:28:19.076760] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95407 ] 00:28:45.659 [2024-12-14 01:28:19.217817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.659 [2024-12-14 01:28:19.242324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.080  [2024-12-14T01:28:21.645Z] Copying: 242/1024 [MB] (242 MBps) [2024-12-14T01:28:22.579Z] Copying: 494/1024 [MB] (252 MBps) [2024-12-14T01:28:23.513Z] Copying: 737/1024 [MB] (243 MBps) [2024-12-14T01:28:23.771Z] Copying: 976/1024 [MB] (239 MBps) [2024-12-14T01:28:24.032Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:28:50.420 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:50.421 Calculate MD5 checksum, iteration 2 00:28:50.421 01:28:23 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:50.421 01:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:50.421 [2024-12-14 01:28:23.892071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
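After each fill, the test reads the region back out of `ftln1` into `test/ftl/file` and records `md5sum file | cut -f1 '-d '` into `sums[i]`; after the shutdown/upgrade the same regions are re-read and the digests must match. The checksum bookkeeping in miniature, with a fixed string standing in for the 1 GiB read back from the bdev:

```shell
# The sums[i] bookkeeping in miniature: hash a file and keep only the
# digest field, as upgrade_shutdown.sh does with "md5sum $file | cut -f1 '-d '".
# A small fixed payload stands in for the 1 GiB read back from ftln1.
f=$(mktemp)
printf 'hello' > "$f"
sum=$(md5sum "$f" | cut -f1 '-d ')
echo "$sum"   # prints the 32-hex-digit MD5 digest of the file contents
```

`cut -f1 '-d '` (delimiter set to a space) strips the trailing filename from `md5sum`'s two-column output, leaving just the digest for the later comparison.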
00:28:50.421 [2024-12-14 01:28:23.893039] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95460 ] 00:28:50.682 [2024-12-14 01:28:24.048751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.682 [2024-12-14 01:28:24.075022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.065  [2024-12-14T01:28:26.616Z] Copying: 519/1024 [MB] (519 MBps) [2024-12-14T01:28:26.616Z] Copying: 1023/1024 [MB] (504 MBps) [2024-12-14T01:28:29.161Z] Copying: 1024/1024 [MB] (average 511 MBps) 00:28:55.549 00:28:55.810 01:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:55.810 01:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:58.358 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0aa4ebeb69a10815e3299ed893fd3dcf 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:58.359 [2024-12-14 01:28:31.535468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.535512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:58.359 [2024-12-14 01:28:31.535528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:58.359 [2024-12-14 01:28:31.535537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:58.359 [2024-12-14 01:28:31.535558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.535566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:58.359 [2024-12-14 01:28:31.535572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:58.359 [2024-12-14 01:28:31.535578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.359 [2024-12-14 01:28:31.535593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.535603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:58.359 [2024-12-14 01:28:31.535609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:58.359 [2024-12-14 01:28:31.535617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.359 [2024-12-14 01:28:31.535679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.200 ms, result 0 00:28:58.359 true 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:58.359 { 00:28:58.359 "name": "ftl", 00:28:58.359 "properties": [ 00:28:58.359 { 00:28:58.359 "name": "superblock_version", 00:28:58.359 "value": 5, 00:28:58.359 "read-only": true 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "name": "base_device", 00:28:58.359 "bands": [ 00:28:58.359 { 00:28:58.359 "id": 0, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 1, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 2, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 3, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 
"id": 4, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 5, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 6, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 7, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 8, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 9, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 10, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 11, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 12, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 13, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 14, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 15, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 16, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 17, 00:28:58.359 "state": "FREE", 00:28:58.359 "validity": 0.0 00:28:58.359 } 00:28:58.359 ], 00:28:58.359 "read-only": true 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "name": "cache_device", 00:28:58.359 "type": "bdev", 00:28:58.359 "chunks": [ 00:28:58.359 { 00:28:58.359 "id": 0, 00:28:58.359 "state": "INACTIVE", 00:28:58.359 "utilization": 0.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 1, 00:28:58.359 "state": "CLOSED", 00:28:58.359 "utilization": 1.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 2, 
00:28:58.359 "state": "CLOSED", 00:28:58.359 "utilization": 1.0 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 3, 00:28:58.359 "state": "OPEN", 00:28:58.359 "utilization": 0.001953125 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "id": 4, 00:28:58.359 "state": "OPEN", 00:28:58.359 "utilization": 0.0 00:28:58.359 } 00:28:58.359 ], 00:28:58.359 "read-only": true 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "name": "verbose_mode", 00:28:58.359 "value": true, 00:28:58.359 "unit": "", 00:28:58.359 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:58.359 }, 00:28:58.359 { 00:28:58.359 "name": "prep_upgrade_on_shutdown", 00:28:58.359 "value": false, 00:28:58.359 "unit": "", 00:28:58.359 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:58.359 } 00:28:58.359 ] 00:28:58.359 } 00:28:58.359 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:58.359 [2024-12-14 01:28:31.947837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.947877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:58.359 [2024-12-14 01:28:31.947887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:58.359 [2024-12-14 01:28:31.947894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.359 [2024-12-14 01:28:31.947911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.947918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:58.359 [2024-12-14 01:28:31.947924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:58.359 [2024-12-14 01:28:31.947929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.359 [2024-12-14 01:28:31.947945] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.359 [2024-12-14 01:28:31.947951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:58.359 [2024-12-14 01:28:31.947957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:58.359 [2024-12-14 01:28:31.947962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.359 [2024-12-14 01:28:31.948008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:28:58.359 true 00:28:58.621 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:58.621 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:58.621 01:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:58.621 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:58.621 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:58.621 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:58.882 [2024-12-14 01:28:32.360144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.882 [2024-12-14 01:28:32.360177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:58.882 [2024-12-14 01:28:32.360185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:58.882 [2024-12-14 01:28:32.360191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.882 [2024-12-14 01:28:32.360208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.882 [2024-12-14 
01:28:32.360215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:58.882 [2024-12-14 01:28:32.360221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:58.882 [2024-12-14 01:28:32.360227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.882 [2024-12-14 01:28:32.360241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.882 [2024-12-14 01:28:32.360247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:58.882 [2024-12-14 01:28:32.360253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:58.882 [2024-12-14 01:28:32.360258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.882 [2024-12-14 01:28:32.360298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:28:58.882 true 00:28:58.882 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:59.143 { 00:28:59.143 "name": "ftl", 00:28:59.143 "properties": [ 00:28:59.143 { 00:28:59.143 "name": "superblock_version", 00:28:59.143 "value": 5, 00:28:59.143 "read-only": true 00:28:59.143 }, 00:28:59.143 { 00:28:59.143 "name": "base_device", 00:28:59.143 "bands": [ 00:28:59.143 { 00:28:59.143 "id": 0, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 1, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 2, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 3, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 4, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 5, 
00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 6, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 7, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 8, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 9, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 10, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 11, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 12, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 13, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 14, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 15, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 16, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 17, 00:28:59.144 "state": "FREE", 00:28:59.144 "validity": 0.0 00:28:59.144 } 00:28:59.144 ], 00:28:59.144 "read-only": true 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "name": "cache_device", 00:28:59.144 "type": "bdev", 00:28:59.144 "chunks": [ 00:28:59.144 { 00:28:59.144 "id": 0, 00:28:59.144 "state": "INACTIVE", 00:28:59.144 "utilization": 0.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 1, 00:28:59.144 "state": "CLOSED", 00:28:59.144 "utilization": 1.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 2, 00:28:59.144 "state": "CLOSED", 00:28:59.144 "utilization": 1.0 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 3, 00:28:59.144 
"state": "OPEN", 00:28:59.144 "utilization": 0.001953125 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "id": 4, 00:28:59.144 "state": "OPEN", 00:28:59.144 "utilization": 0.0 00:28:59.144 } 00:28:59.144 ], 00:28:59.144 "read-only": true 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "name": "verbose_mode", 00:28:59.144 "value": true, 00:28:59.144 "unit": "", 00:28:59.144 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:59.144 }, 00:28:59.144 { 00:28:59.144 "name": "prep_upgrade_on_shutdown", 00:28:59.144 "value": true, 00:28:59.144 "unit": "", 00:28:59.144 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:59.144 } 00:28:59.144 ] 00:28:59.144 } 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95139 ]] 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95139 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95139 ']' 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95139 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95139 00:28:59.144 killing process with pid 95139 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95139' 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 95139 00:28:59.144 01:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95139 00:28:59.144 [2024-12-14 01:28:32.689874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:59.144 [2024-12-14 01:28:32.693965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.144 [2024-12-14 01:28:32.693994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:59.144 [2024-12-14 01:28:32.694004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:59.144 [2024-12-14 01:28:32.694015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.144 [2024-12-14 01:28:32.694033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:59.144 [2024-12-14 01:28:32.694415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.144 [2024-12-14 01:28:32.694436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:59.144 [2024-12-14 01:28:32.694442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:28:59.144 [2024-12-14 01:28:32.694448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.949520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.949595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:09.156 [2024-12-14 01:28:40.949651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8255.023 ms 00:29:09.156 [2024-12-14 01:28:40.949662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.951413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.951456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Persist L2P 00:29:09.156 [2024-12-14 01:28:40.951469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.732 ms 00:29:09.156 [2024-12-14 01:28:40.951477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.952616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.952684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:09.156 [2024-12-14 01:28:40.952696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:29:09.156 [2024-12-14 01:28:40.952711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.956017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.956186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:09.156 [2024-12-14 01:28:40.956267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.266 ms 00:29:09.156 [2024-12-14 01:28:40.956295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.959761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.959934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:09.156 [2024-12-14 01:28:40.960007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.412 ms 00:29:09.156 [2024-12-14 01:28:40.960041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.960142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.960167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:09.156 [2024-12-14 01:28:40.960189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:09.156 [2024-12-14 01:28:40.960209] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.962969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.963147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:09.156 [2024-12-14 01:28:40.963215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.731 ms 00:29:09.156 [2024-12-14 01:28:40.963238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.965954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.966126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:09.156 [2024-12-14 01:28:40.966187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.664 ms 00:29:09.156 [2024-12-14 01:28:40.966199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.968576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.968643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:09.156 [2024-12-14 01:28:40.968655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.256 ms 00:29:09.156 [2024-12-14 01:28:40.968664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.971155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.971205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:09.156 [2024-12-14 01:28:40.971216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.411 ms 00:29:09.156 [2024-12-14 01:28:40.971224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.156 [2024-12-14 01:28:40.971265] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:09.156 [2024-12-14 01:28:40.971281] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:09.156 [2024-12-14 01:28:40.971292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:09.156 [2024-12-14 01:28:40.971301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:09.156 [2024-12-14 01:28:40.971310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971396] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:09.156 [2024-12-14 01:28:40.971428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:09.156 [2024-12-14 01:28:40.971436] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f8d2383c-aa2e-4805-87f5-b787e5398960 00:29:09.156 [2024-12-14 01:28:40.971444] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:09.156 [2024-12-14 01:28:40.971460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:09.156 [2024-12-14 01:28:40.971467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:09.156 [2024-12-14 01:28:40.971477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:09.156 [2024-12-14 01:28:40.971484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:09.156 [2024-12-14 01:28:40.971492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:09.156 [2024-12-14 01:28:40.971500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:09.156 [2024-12-14 01:28:40.971506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:09.156 [2024-12-14 01:28:40.971514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:09.156 [2024-12-14 01:28:40.971524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.156 [2024-12-14 01:28:40.971534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump 
statistics 00:29:09.157 [2024-12-14 01:28:40.971543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:29:09.157 [2024-12-14 01:28:40.971551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.974133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.157 [2024-12-14 01:28:40.974178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:09.157 [2024-12-14 01:28:40.974190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.564 ms 00:29:09.157 [2024-12-14 01:28:40.974199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.974327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.157 [2024-12-14 01:28:40.974336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:09.157 [2024-12-14 01:28:40.974346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:29:09.157 [2024-12-14 01:28:40.974354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.982551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:40.982611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:09.157 [2024-12-14 01:28:40.982641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:40.982650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.982688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:40.982697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:09.157 [2024-12-14 01:28:40.982706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:40.982714] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.982793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:40.982805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:09.157 [2024-12-14 01:28:40.982814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:40.982822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.982840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:40.982849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:09.157 [2024-12-14 01:28:40.982862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:40.982870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:40.998263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:40.998324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:09.157 [2024-12-14 01:28:40.998337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:40.998346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.009602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.009677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:09.157 [2024-12-14 01:28:41.009690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.009698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.009770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.009789] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:09.157 [2024-12-14 01:28:41.009799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.009809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.009854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.009865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:09.157 [2024-12-14 01:28:41.009874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.009883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.009953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.009964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:09.157 [2024-12-14 01:28:41.009977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.009992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.010023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.010033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:09.157 [2024-12-14 01:28:41.010042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.010050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.010095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.010115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:09.157 [2024-12-14 01:28:41.010127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 
[2024-12-14 01:28:41.010135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.010188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.157 [2024-12-14 01:28:41.010199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:09.157 [2024-12-14 01:28:41.010208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.157 [2024-12-14 01:28:41.010217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.157 [2024-12-14 01:28:41.010355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8316.322 ms, result 0 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95670 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95670 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95670 ']' 00:29:11.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:11.214 01:28:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:11.214 [2024-12-14 01:28:44.577585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:11.214 [2024-12-14 01:28:44.577747] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95670 ] 00:29:11.214 [2024-12-14 01:28:44.724755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.214 [2024-12-14 01:28:44.744658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.475 [2024-12-14 01:28:45.016635] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.475 [2024-12-14 01:28:45.016710] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.737 [2024-12-14 01:28:45.164143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.164195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:11.737 [2024-12-14 01:28:45.164211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.004 ms 00:29:11.737 [2024-12-14 01:28:45.164219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.164271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.164281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:11.737 [2024-12-14 01:28:45.164292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:11.737 [2024-12-14 01:28:45.164300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.164321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:11.737 [2024-12-14 01:28:45.164783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:11.737 [2024-12-14 01:28:45.164824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.164834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:11.737 [2024-12-14 01:28:45.164844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:29:11.737 [2024-12-14 01:28:45.164852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.166120] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:11.737 [2024-12-14 01:28:45.168925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.168968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:11.737 [2024-12-14 01:28:45.168978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.808 ms 00:29:11.737 [2024-12-14 01:28:45.168986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.169067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:11.737 [2024-12-14 01:28:45.169078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:11.737 [2024-12-14 01:28:45.169087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:11.737 [2024-12-14 01:28:45.169095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.175023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.175059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:11.737 [2024-12-14 01:28:45.175069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.878 ms 00:29:11.737 [2024-12-14 01:28:45.175076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.175118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.175126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:11.737 [2024-12-14 01:28:45.175135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:11.737 [2024-12-14 01:28:45.175146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.175193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.175207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:11.737 [2024-12-14 01:28:45.175215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:11.737 [2024-12-14 01:28:45.175223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.175247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:11.737 [2024-12-14 01:28:45.176792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.176824] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:11.737 [2024-12-14 01:28:45.176834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.553 ms 00:29:11.737 [2024-12-14 01:28:45.176841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.176873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.176884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:11.737 [2024-12-14 01:28:45.176893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:11.737 [2024-12-14 01:28:45.176900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.176921] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:11.737 [2024-12-14 01:28:45.176942] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:11.737 [2024-12-14 01:28:45.176978] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:11.737 [2024-12-14 01:28:45.176998] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:11.737 [2024-12-14 01:28:45.177106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:11.737 [2024-12-14 01:28:45.177124] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:11.737 [2024-12-14 01:28:45.177134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:11.737 [2024-12-14 01:28:45.177147] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177156] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177164] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:11.737 [2024-12-14 01:28:45.177175] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:11.737 [2024-12-14 01:28:45.177186] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:11.737 [2024-12-14 01:28:45.177193] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:11.737 [2024-12-14 01:28:45.177202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.177209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:11.737 [2024-12-14 01:28:45.177218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:29:11.737 [2024-12-14 01:28:45.177225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.177309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.737 [2024-12-14 01:28:45.177317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:11.737 [2024-12-14 01:28:45.177325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:11.737 [2024-12-14 01:28:45.177331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.737 [2024-12-14 01:28:45.177435] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:11.737 [2024-12-14 01:28:45.177460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:11.737 [2024-12-14 01:28:45.177469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:11.737 
[2024-12-14 01:28:45.177497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:11.737 [2024-12-14 01:28:45.177513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:11.737 [2024-12-14 01:28:45.177522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:11.737 [2024-12-14 01:28:45.177529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:11.737 [2024-12-14 01:28:45.177545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:11.737 [2024-12-14 01:28:45.177553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:11.737 [2024-12-14 01:28:45.177568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:11.737 [2024-12-14 01:28:45.177575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:11.737 [2024-12-14 01:28:45.177599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:11.737 [2024-12-14 01:28:45.177655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:11.737 [2024-12-14 01:28:45.177673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:11.737 [2024-12-14 01:28:45.177680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:11.737 
[2024-12-14 01:28:45.177695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:11.737 [2024-12-14 01:28:45.177703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:11.737 [2024-12-14 01:28:45.177718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:11.737 [2024-12-14 01:28:45.177725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:11.737 [2024-12-14 01:28:45.177740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:11.737 [2024-12-14 01:28:45.177748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:11.737 [2024-12-14 01:28:45.177765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:11.737 [2024-12-14 01:28:45.177774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:11.737 [2024-12-14 01:28:45.177791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:11.737 [2024-12-14 01:28:45.177798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:11.737 [2024-12-14 01:28:45.177813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.737 [2024-12-14 01:28:45.177827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 
00:29:11.737 [2024-12-14 01:28:45.177834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:11.738 [2024-12-14 01:28:45.177840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.738 [2024-12-14 01:28:45.177847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:11.738 [2024-12-14 01:28:45.177855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:11.738 [2024-12-14 01:28:45.177862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.738 [2024-12-14 01:28:45.177869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.738 [2024-12-14 01:28:45.177877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:11.738 [2024-12-14 01:28:45.177886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:11.738 [2024-12-14 01:28:45.177892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:11.738 [2024-12-14 01:28:45.177901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:11.738 [2024-12-14 01:28:45.177907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:11.738 [2024-12-14 01:28:45.177915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:11.738 [2024-12-14 01:28:45.177923] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:11.738 [2024-12-14 01:28:45.177932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.177941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:11.738 [2024-12-14 01:28:45.177949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 
blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.177956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.177963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:11.738 [2024-12-14 01:28:45.177970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:11.738 [2024-12-14 01:28:45.177977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:11.738 [2024-12-14 01:28:45.177984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:11.738 [2024-12-14 01:28:45.177991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.177998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178036] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:11.738 [2024-12-14 01:28:45.178043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:11.738 [2024-12-14 01:28:45.178051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:11.738 [2024-12-14 01:28:45.178066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:11.738 [2024-12-14 01:28:45.178073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:11.738 [2024-12-14 01:28:45.178085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:11.738 [2024-12-14 01:28:45.178092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.738 [2024-12-14 01:28:45.178105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:11.738 [2024-12-14 01:28:45.178112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:29:11.738 [2024-12-14 01:28:45.178119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.738 [2024-12-14 01:28:45.178171] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:29:11.738 [2024-12-14 01:28:45.178181] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:15.948 [2024-12-14 01:28:48.745527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.745656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:15.948 [2024-12-14 01:28:48.745675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3567.343 ms 00:29:15.948 [2024-12-14 01:28:48.745693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.948 [2024-12-14 01:28:48.758697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.758757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.948 [2024-12-14 01:28:48.758772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.886 ms 00:29:15.948 [2024-12-14 01:28:48.758782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.948 [2024-12-14 01:28:48.758865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.758877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:15.948 [2024-12-14 01:28:48.758897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:15.948 [2024-12-14 01:28:48.758907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.948 [2024-12-14 01:28:48.771599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.771669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.948 [2024-12-14 01:28:48.771682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.619 ms 00:29:15.948 [2024-12-14 01:28:48.771691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.948 [2024-12-14 01:28:48.771740] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.771750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.948 [2024-12-14 01:28:48.771760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:15.948 [2024-12-14 01:28:48.771773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.948 [2024-12-14 01:28:48.772345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.948 [2024-12-14 01:28:48.772391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.948 [2024-12-14 01:28:48.772403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:29:15.949 [2024-12-14 01:28:48.772413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.772464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.772475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.949 [2024-12-14 01:28:48.772485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:15.949 [2024-12-14 01:28:48.772495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.781107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.781153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.949 [2024-12-14 01:28:48.781175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.580 ms 00:29:15.949 [2024-12-14 01:28:48.781183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.792936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:15.949 [2024-12-14 01:28:48.793000] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state 
loaded successfully 00:29:15.949 [2024-12-14 01:28:48.793020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.793030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:15.949 [2024-12-14 01:28:48.793041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.733 ms 00:29:15.949 [2024-12-14 01:28:48.793049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.799089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.799150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:15.949 [2024-12-14 01:28:48.799167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.983 ms 00:29:15.949 [2024-12-14 01:28:48.799180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.801849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.801910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:15.949 [2024-12-14 01:28:48.801925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.603 ms 00:29:15.949 [2024-12-14 01:28:48.801936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.804712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.804762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:15.949 [2024-12-14 01:28:48.804775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.719 ms 00:29:15.949 [2024-12-14 01:28:48.804786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.805261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.805279] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:15.949 [2024-12-14 01:28:48.805292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:29:15.949 [2024-12-14 01:28:48.805303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.828721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.828783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:15.949 [2024-12-14 01:28:48.828806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.388 ms 00:29:15.949 [2024-12-14 01:28:48.828815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.836912] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:15.949 [2024-12-14 01:28:48.837873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.837921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:15.949 [2024-12-14 01:28:48.837933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.003 ms 00:29:15.949 [2024-12-14 01:28:48.837941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.838020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.838032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:15.949 [2024-12-14 01:28:48.838042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:15.949 [2024-12-14 01:28:48.838055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.838104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.838115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:29:15.949 [2024-12-14 01:28:48.838127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:15.949 [2024-12-14 01:28:48.838135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.838160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.838169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:15.949 [2024-12-14 01:28:48.838179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:15.949 [2024-12-14 01:28:48.838193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.838230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:15.949 [2024-12-14 01:28:48.838242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.838251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:15.949 [2024-12-14 01:28:48.838261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:15.949 [2024-12-14 01:28:48.838274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.843298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.843349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:15.949 [2024-12-14 01:28:48.843360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.003 ms 00:29:15.949 [2024-12-14 01:28:48.843369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.843451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:48.843461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:15.949 [2024-12-14 01:28:48.843471] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:15.949 [2024-12-14 01:28:48.843479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:48.844853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3680.194 ms, result 0 00:29:15.949 [2024-12-14 01:28:48.858306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.949 [2024-12-14 01:28:48.874337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:15.949 [2024-12-14 01:28:48.882440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.949 01:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.949 01:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:15.949 01:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.949 01:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:15.949 01:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:15.949 [2024-12-14 01:28:49.126518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:49.126579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:15.949 [2024-12-14 01:28:49.126594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:15.949 [2024-12-14 01:28:49.126603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:49.126643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:49.126653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:15.949 [2024-12-14 
01:28:49.126665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.949 [2024-12-14 01:28:49.126674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:49.126696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.949 [2024-12-14 01:28:49.126705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:15.949 [2024-12-14 01:28:49.126714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:15.949 [2024-12-14 01:28:49.126722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.949 [2024-12-14 01:28:49.126787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.264 ms, result 0 00:29:15.949 true 00:29:15.949 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.949 { 00:29:15.949 "name": "ftl", 00:29:15.949 "properties": [ 00:29:15.949 { 00:29:15.949 "name": "superblock_version", 00:29:15.949 "value": 5, 00:29:15.949 "read-only": true 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "name": "base_device", 00:29:15.949 "bands": [ 00:29:15.949 { 00:29:15.949 "id": 0, 00:29:15.949 "state": "CLOSED", 00:29:15.949 "validity": 1.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 1, 00:29:15.949 "state": "CLOSED", 00:29:15.949 "validity": 1.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 2, 00:29:15.949 "state": "CLOSED", 00:29:15.949 "validity": 0.007843137254901933 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 3, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 4, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 5, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 
00:29:15.949 "id": 6, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 7, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 8, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 9, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.949 { 00:29:15.949 "id": 10, 00:29:15.949 "state": "FREE", 00:29:15.949 "validity": 0.0 00:29:15.949 }, 00:29:15.950 { 00:29:15.950 "id": 11, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 12, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 13, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 14, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 15, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 16, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 17, 00:29:15.950 "state": "FREE", 00:29:15.950 "validity": 0.0 00:29:15.950 } 00:29:15.950 ], 00:29:15.950 "read-only": true 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "name": "cache_device", 00:29:15.950 "type": "bdev", 00:29:15.950 "chunks": [ 00:29:15.950 { 00:29:15.950 "id": 0, 00:29:15.950 "state": "INACTIVE", 00:29:15.950 "utilization": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 1, 00:29:15.950 "state": "OPEN", 00:29:15.950 "utilization": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 2, 00:29:15.950 "state": "OPEN", 00:29:15.950 "utilization": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "id": 3, 00:29:15.950 "state": "FREE", 00:29:15.950 "utilization": 0.0 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 
"id": 4, 00:29:15.950 "state": "FREE", 00:29:15.950 "utilization": 0.0 00:29:15.950 } 00:29:15.950 ], 00:29:15.950 "read-only": true 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "name": "verbose_mode", 00:29:15.950 "value": true, 00:29:15.950 "unit": "", 00:29:15.950 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:15.950 }, 00:29:15.950 { 00:29:15.950 "name": "prep_upgrade_on_shutdown", 00:29:15.950 "value": false, 00:29:15.950 "unit": "", 00:29:15.950 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:15.950 } 00:29:15.950 ] 00:29:15.950 } 00:29:15.950 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:15.950 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:15.950 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # 
test_validate_checksum 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:16.211 Validate MD5 checksum, iteration 1 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.211 01:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.473 [2024-12-14 01:28:49.879701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:16.473 [2024-12-14 01:28:49.879852] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95742 ] 00:29:16.473 [2024-12-14 01:28:50.026783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.473 [2024-12-14 01:28:50.071853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.855  [2024-12-14T01:28:52.401Z] Copying: 598/1024 [MB] (598 MBps) [2024-12-14T01:28:52.969Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:29:19.357 00:29:19.357 01:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:19.357 01:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1596f94e2443ce6b92ca67631544f09c 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1596f94e2443ce6b92ca67631544f09c != \1\5\9\6\f\9\4\e\2\4\4\3\c\e\6\b\9\2\c\a\6\7\6\3\1\5\4\4\f\0\9\c ]] 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:21.903 Validate MD5 checksum, iteration 2 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:21.903 01:28:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:21.903 01:28:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:21.903 [2024-12-14 01:28:54.972487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:21.903 [2024-12-14 01:28:54.972607] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95798 ] 00:29:21.903 [2024-12-14 01:28:55.112141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.903 [2024-12-14 01:28:55.133544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.837  [2024-12-14T01:28:57.015Z] Copying: 678/1024 [MB] (678 MBps) [2024-12-14T01:28:57.584Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:29:23.972 00:29:23.972 01:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:23.972 01:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0aa4ebeb69a10815e3299ed893fd3dcf 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@105 -- # [[ 0aa4ebeb69a10815e3299ed893fd3dcf != \0\a\a\4\e\b\e\b\6\9\a\1\0\8\1\5\e\3\2\9\9\e\d\8\9\3\f\d\3\d\c\f ]] 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95670 ]] 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95670 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95848 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95848 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95848 ']' 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.517 01:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.517 [2024-12-14 01:28:59.604136] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:26.517 [2024-12-14 01:28:59.604262] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95848 ] 00:29:26.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 95670 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:26.517 [2024-12-14 01:28:59.749761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.517 [2024-12-14 01:28:59.778891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.517 [2024-12-14 01:29:00.071928] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:26.517 [2024-12-14 01:29:00.071997] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:26.779 [2024-12-14 01:29:00.219690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.779 [2024-12-14 01:29:00.219753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:26.779 [2024-12-14 01:29:00.219770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:26.780 [2024-12-14 01:29:00.219779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:26.780 [2024-12-14 01:29:00.219840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.219855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:26.780 [2024-12-14 01:29:00.219867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:26.780 [2024-12-14 01:29:00.219876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.219900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:26.780 [2024-12-14 01:29:00.220224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:26.780 [2024-12-14 01:29:00.220258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.220267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:26.780 [2024-12-14 01:29:00.220277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:29:26.780 [2024-12-14 01:29:00.220285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.221147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:26.780 [2024-12-14 01:29:00.226924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.226988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:26.780 [2024-12-14 01:29:00.227001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.782 ms 00:29:26.780 [2024-12-14 01:29:00.227009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.228466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.228515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:26.780 
[2024-12-14 01:29:00.228526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:26.780 [2024-12-14 01:29:00.228542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.228884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.228917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:26.780 [2024-12-14 01:29:00.228931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:29:26.780 [2024-12-14 01:29:00.228943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.228985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.228994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:26.780 [2024-12-14 01:29:00.229004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:26.780 [2024-12-14 01:29:00.229011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.229055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.229069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:26.780 [2024-12-14 01:29:00.229080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:26.780 [2024-12-14 01:29:00.229089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.229112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:26.780 [2024-12-14 01:29:00.230424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.230474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:26.780 [2024-12-14 01:29:00.230484] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 1.318 ms 00:29:26.780 [2024-12-14 01:29:00.230492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.230531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.230548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:26.780 [2024-12-14 01:29:00.230556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:26.780 [2024-12-14 01:29:00.230564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.230585] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:26.780 [2024-12-14 01:29:00.230607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:26.780 [2024-12-14 01:29:00.230670] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:26.780 [2024-12-14 01:29:00.230694] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:26.780 [2024-12-14 01:29:00.230802] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:26.780 [2024-12-14 01:29:00.230817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:26.780 [2024-12-14 01:29:00.230837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:26.780 [2024-12-14 01:29:00.230851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:26.780 [2024-12-14 01:29:00.230875] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:26.780 [2024-12-14 01:29:00.230883] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: 
[FTL][ftl] L2P entries: 3774873 00:29:26.780 [2024-12-14 01:29:00.230891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:26.780 [2024-12-14 01:29:00.230899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:26.780 [2024-12-14 01:29:00.230907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:26.780 [2024-12-14 01:29:00.230916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.230923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:26.780 [2024-12-14 01:29:00.230937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:29:26.780 [2024-12-14 01:29:00.230944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.231044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.780 [2024-12-14 01:29:00.231052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:26.780 [2024-12-14 01:29:00.231063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:29:26.780 [2024-12-14 01:29:00.231070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.780 [2024-12-14 01:29:00.231176] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:26.780 [2024-12-14 01:29:00.231186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:26.780 [2024-12-14 01:29:00.231194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:26.780 [2024-12-14 01:29:00.231223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231230] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:26.780 [2024-12-14 01:29:00.231237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:26.780 [2024-12-14 01:29:00.231244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:26.780 [2024-12-14 01:29:00.231250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:26.780 [2024-12-14 01:29:00.231264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:26.780 [2024-12-14 01:29:00.231271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:26.780 [2024-12-14 01:29:00.231293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:26.780 [2024-12-14 01:29:00.231305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:26.780 [2024-12-14 01:29:00.231329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:26.780 [2024-12-14 01:29:00.231336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:26.780 [2024-12-14 01:29:00.231349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:26.780 [2024-12-14 01:29:00.231356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:26.780 [2024-12-14 01:29:00.231370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:26.780 [2024-12-14 01:29:00.231377] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:26.780 [2024-12-14 01:29:00.231390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:26.780 [2024-12-14 01:29:00.231397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:26.780 [2024-12-14 01:29:00.231410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:26.780 [2024-12-14 01:29:00.231419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:26.780 [2024-12-14 01:29:00.231434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:26.780 [2024-12-14 01:29:00.231441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:26.780 [2024-12-14 01:29:00.231455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:26.780 [2024-12-14 01:29:00.231461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:26.780 [2024-12-14 01:29:00.231474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:26.780 [2024-12-14 01:29:00.231507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:26.780 [2024-12-14 01:29:00.231519] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.780 [2024-12-14 01:29:00.231530] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:26.780 [2024-12-14 01:29:00.231542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:26.780 [2024-12-14 01:29:00.231554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:26.781 [2024-12-14 01:29:00.231564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.781 [2024-12-14 01:29:00.231573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:26.781 [2024-12-14 01:29:00.231581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:26.781 [2024-12-14 01:29:00.231589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:26.781 [2024-12-14 01:29:00.231597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:26.781 [2024-12-14 01:29:00.231603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:26.781 [2024-12-14 01:29:00.231613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:26.781 [2024-12-14 01:29:00.231639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:26.781 [2024-12-14 01:29:00.231654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:26.781 [2024-12-14 01:29:00.231672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 
blk_offs:0xec0 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:26.781 [2024-12-14 01:29:00.231695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:26.781 [2024-12-14 01:29:00.231703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:26.781 [2024-12-14 01:29:00.231711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:26.781 [2024-12-14 01:29:00.231720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:26.781 [2024-12-14 01:29:00.231774] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:26.781 [2024-12-14 01:29:00.231782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:26.781 [2024-12-14 01:29:00.231803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:26.781 [2024-12-14 01:29:00.231811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:26.781 [2024-12-14 01:29:00.231822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:26.781 [2024-12-14 01:29:00.231834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.231850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:26.781 [2024-12-14 01:29:00.231863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.726 ms 00:29:26.781 [2024-12-14 01:29:00.231878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.243503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.243550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:26.781 [2024-12-14 01:29:00.243561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.551 ms 00:29:26.781 [2024-12-14 01:29:00.243571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.243616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 
01:29:00.243656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:26.781 [2024-12-14 01:29:00.243666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:26.781 [2024-12-14 01:29:00.243676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.256270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.256318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:26.781 [2024-12-14 01:29:00.256330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.522 ms 00:29:26.781 [2024-12-14 01:29:00.256339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.256381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.256390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:26.781 [2024-12-14 01:29:00.256403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:26.781 [2024-12-14 01:29:00.256410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.256541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.256575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:26.781 [2024-12-14 01:29:00.256586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:29:26.781 [2024-12-14 01:29:00.256594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.256661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.256673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:26.781 [2024-12-14 01:29:00.256687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.042 ms 00:29:26.781 [2024-12-14 01:29:00.256709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.264479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.264523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:26.781 [2024-12-14 01:29:00.264533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.745 ms 00:29:26.781 [2024-12-14 01:29:00.264542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.264676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.264699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:26.781 [2024-12-14 01:29:00.264712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:26.781 [2024-12-14 01:29:00.264720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.279737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.279789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:26.781 [2024-12-14 01:29:00.279813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.996 ms 00:29:26.781 [2024-12-14 01:29:00.279821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.281210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.281250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:26.781 [2024-12-14 01:29:00.281264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:29:26.781 [2024-12-14 01:29:00.281273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.299703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.299758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:26.781 [2024-12-14 01:29:00.299775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.383 ms 00:29:26.781 [2024-12-14 01:29:00.299784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.299927] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:26.781 [2024-12-14 01:29:00.300044] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:26.781 [2024-12-14 01:29:00.300136] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:26.781 [2024-12-14 01:29:00.300223] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:26.781 [2024-12-14 01:29:00.300240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.300249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:26.781 [2024-12-14 01:29:00.300268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.409 ms 00:29:26.781 [2024-12-14 01:29:00.300276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.300348] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:26.781 [2024-12-14 01:29:00.300363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.300371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:26.781 [2024-12-14 01:29:00.300385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:26.781 [2024-12-14 01:29:00.300392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:26.781 [2024-12-14 01:29:00.303291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.303343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:26.781 [2024-12-14 01:29:00.303355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.874 ms 00:29:26.781 [2024-12-14 01:29:00.303367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.304207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.304243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:26.781 [2024-12-14 01:29:00.304253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:26.781 [2024-12-14 01:29:00.304262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.781 [2024-12-14 01:29:00.304358] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:26.781 [2024-12-14 01:29:00.304528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.781 [2024-12-14 01:29:00.304551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:26.781 [2024-12-14 01:29:00.304566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:29:26.782 [2024-12-14 01:29:00.304576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.352 [2024-12-14 01:29:00.870592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.352 [2024-12-14 01:29:00.870676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:27.352 [2024-12-14 01:29:00.870691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 565.632 ms 00:29:27.352 [2024-12-14 01:29:00.870699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.352 [2024-12-14 
01:29:00.872061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.352 [2024-12-14 01:29:00.872100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:27.352 [2024-12-14 01:29:00.872124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.998 ms 00:29:27.352 [2024-12-14 01:29:00.872133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.352 [2024-12-14 01:29:00.872519] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:27.352 [2024-12-14 01:29:00.872547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.352 [2024-12-14 01:29:00.872556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:27.352 [2024-12-14 01:29:00.872566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:29:27.352 [2024-12-14 01:29:00.872573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.352 [2024-12-14 01:29:00.872666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.352 [2024-12-14 01:29:00.872685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:27.352 [2024-12-14 01:29:00.872695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:27.352 [2024-12-14 01:29:00.872703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.352 [2024-12-14 01:29:00.872738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 568.392 ms, result 0 00:29:27.352 [2024-12-14 01:29:00.872781] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:27.352 [2024-12-14 01:29:00.872862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.352 [2024-12-14 01:29:00.872886] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:27.352 [2024-12-14 01:29:00.872895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:29:27.352 [2024-12-14 01:29:00.872903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.441961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.442021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:27.924 [2024-12-14 01:29:01.442034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 568.633 ms 00:29:27.924 [2024-12-14 01:29:01.442042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.443405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.443439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:27.924 [2024-12-14 01:29:01.443450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:29:27.924 [2024-12-14 01:29:01.443458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.444098] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:27.924 [2024-12-14 01:29:01.444131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.444139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:27.924 [2024-12-14 01:29:01.444147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:29:27.924 [2024-12-14 01:29:01.444154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.444183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.444192] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:27.924 [2024-12-14 01:29:01.444200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:27.924 [2024-12-14 01:29:01.444206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.444241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 571.459 ms, result 0 00:29:27.924 [2024-12-14 01:29:01.444280] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:27.924 [2024-12-14 01:29:01.444290] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:27.924 [2024-12-14 01:29:01.444300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.444313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:27.924 [2024-12-14 01:29:01.444321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1139.976 ms 00:29:27.924 [2024-12-14 01:29:01.444331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.444360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.444368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:27.924 [2024-12-14 01:29:01.444376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:27.924 [2024-12-14 01:29:01.444383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.452234] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:27.924 [2024-12-14 01:29:01.452333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.452346] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:27.924 [2024-12-14 01:29:01.452355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.935 ms 00:29:27.924 [2024-12-14 01:29:01.452362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.453056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.453080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:27.924 [2024-12-14 01:29:01.453094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:29:27.924 [2024-12-14 01:29:01.453106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.455376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.924 [2024-12-14 01:29:01.455405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:27.924 [2024-12-14 01:29:01.455420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.253 ms 00:29:27.924 [2024-12-14 01:29:01.455430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.924 [2024-12-14 01:29:01.455468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.925 [2024-12-14 01:29:01.455476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:27.925 [2024-12-14 01:29:01.455484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:27.925 [2024-12-14 01:29:01.455491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.925 [2024-12-14 01:29:01.455593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.925 [2024-12-14 01:29:01.455603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:27.925 [2024-12-14 01:29:01.455613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:27.925 [2024-12-14 
01:29:01.455630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.925 [2024-12-14 01:29:01.455650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.925 [2024-12-14 01:29:01.455662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:27.925 [2024-12-14 01:29:01.455669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:27.925 [2024-12-14 01:29:01.455676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.925 [2024-12-14 01:29:01.455708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:27.925 [2024-12-14 01:29:01.455718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.925 [2024-12-14 01:29:01.455726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:27.925 [2024-12-14 01:29:01.455733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:27.925 [2024-12-14 01:29:01.455744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.925 [2024-12-14 01:29:01.455808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.925 [2024-12-14 01:29:01.455818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:27.925 [2024-12-14 01:29:01.455827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:27.925 [2024-12-14 01:29:01.455834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.925 [2024-12-14 01:29:01.456685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1236.581 ms, result 0 00:29:27.925 [2024-12-14 01:29:01.469008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.925 [2024-12-14 01:29:01.485021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 
00:29:27.925 [2024-12-14 01:29:01.493130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:28.497 Validate MD5 checksum, iteration 1 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:28.497 01:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 
--ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.758 [2024-12-14 01:29:02.135072] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:28.758 [2024-12-14 01:29:02.135190] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95883 ] 00:29:28.758 [2024-12-14 01:29:02.280476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.758 [2024-12-14 01:29:02.298674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.171  [2024-12-14T01:29:04.355Z] Copying: 662/1024 [MB] (662 MBps) [2024-12-14T01:29:05.299Z] Copying: 1024/1024 [MB] (average 623 MBps) 00:29:31.687 00:29:31.687 01:29:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:31.687 01:29:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:34.230 Validate MD5 checksum, iteration 2 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1596f94e2443ce6b92ca67631544f09c 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1596f94e2443ce6b92ca67631544f09c != \1\5\9\6\f\9\4\e\2\4\4\3\c\e\6\b\9\2\c\a\6\7\6\3\1\5\4\4\f\0\9\c ]] 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.230 01:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.230 [2024-12-14 01:29:07.370154] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:34.230 [2024-12-14 01:29:07.370311] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95944 ] 00:29:34.230 [2024-12-14 01:29:07.518300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.230 [2024-12-14 01:29:07.547640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.617  [2024-12-14T01:29:09.801Z] Copying: 616/1024 [MB] (616 MBps) [2024-12-14T01:29:10.374Z] Copying: 1024/1024 [MB] (average 597 MBps) 00:29:36.762 00:29:36.762 01:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:36.762 01:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0aa4ebeb69a10815e3299ed893fd3dcf 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0aa4ebeb69a10815e3299ed893fd3dcf != \0\a\a\4\e\b\e\b\6\9\a\1\0\8\1\5\e\3\2\9\9\e\d\8\9\3\f\d\3\d\c\f ]] 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:38.677 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95848 ]] 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95848 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95848 ']' 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95848 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95848 00:29:38.939 killing process with pid 95848 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95848' 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95848 00:29:38.939 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95848 00:29:38.939 [2024-12-14 01:29:12.492687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:38.939 [2024-12-14 01:29:12.495943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.495976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Deinit core IO channel 00:29:38.939 [2024-12-14 01:29:12.495986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:38.939 [2024-12-14 01:29:12.495992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.496009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:38.939 [2024-12-14 01:29:12.496388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.496408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:38.939 [2024-12-14 01:29:12.496419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:29:38.939 [2024-12-14 01:29:12.496425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.496629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.496642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:38.939 [2024-12-14 01:29:12.496649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:29:38.939 [2024-12-14 01:29:12.496655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.497769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.497791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:38.939 [2024-12-14 01:29:12.497798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:29:38.939 [2024-12-14 01:29:12.497808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.498656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.498680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:38.939 [2024-12-14 01:29:12.498688] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:29:38.939 [2024-12-14 01:29:12.498694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.500024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.500053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:38.939 [2024-12-14 01:29:12.500060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.302 ms 00:29:38.939 [2024-12-14 01:29:12.500069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.501170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.501199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:38.939 [2024-12-14 01:29:12.501206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.074 ms 00:29:38.939 [2024-12-14 01:29:12.501212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.501271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.501279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:38.939 [2024-12-14 01:29:12.501285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:38.939 [2024-12-14 01:29:12.501295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.939 [2024-12-14 01:29:12.503147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.939 [2024-12-14 01:29:12.503183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:38.939 [2024-12-14 01:29:12.503189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.840 ms 00:29:38.939 [2024-12-14 01:29:12.503195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 
[2024-12-14 01:29:12.505278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.505306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:38.940 [2024-12-14 01:29:12.505312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.058 ms 00:29:38.940 [2024-12-14 01:29:12.505317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.506867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.506894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:38.940 [2024-12-14 01:29:12.506901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.524 ms 00:29:38.940 [2024-12-14 01:29:12.506907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.508784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.508810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:38.940 [2024-12-14 01:29:12.508817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.831 ms 00:29:38.940 [2024-12-14 01:29:12.508822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.508847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:38.940 [2024-12-14 01:29:12.508858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:38.940 [2024-12-14 01:29:12.508866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:38.940 [2024-12-14 01:29:12.508873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:38.940 [2024-12-14 01:29:12.508879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:38.940 [2024-12-14 01:29:12.508960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:29:38.940 [2024-12-14 01:29:12.508967] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:38.940 [2024-12-14 01:29:12.508973] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f8d2383c-aa2e-4805-87f5-b787e5398960 00:29:38.940 [2024-12-14 01:29:12.508978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:38.940 [2024-12-14 01:29:12.508984] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:38.940 [2024-12-14 01:29:12.508989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:38.940 [2024-12-14 01:29:12.508995] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:38.940 [2024-12-14 01:29:12.509001] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:38.940 [2024-12-14 01:29:12.509006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:38.940 [2024-12-14 01:29:12.509015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:38.940 [2024-12-14 01:29:12.509021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:38.940 [2024-12-14 01:29:12.509026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:38.940 [2024-12-14 01:29:12.509031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.509037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:38.940 [2024-12-14 01:29:12.509043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:29:38.940 [2024-12-14 01:29:12.509052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.510286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.510313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:38.940 [2024-12-14 01:29:12.510320] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.222 ms 00:29:38.940 [2024-12-14 01:29:12.510326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.510402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.940 [2024-12-14 01:29:12.510409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:38.940 [2024-12-14 01:29:12.510416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:38.940 [2024-12-14 01:29:12.510421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.514915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.514946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:38.940 [2024-12-14 01:29:12.514953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.514961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.514984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.514990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:38.940 [2024-12-14 01:29:12.514996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.515001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.515052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.515062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:38.940 [2024-12-14 01:29:12.515068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.515074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 
01:29:12.515092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.515098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:38.940 [2024-12-14 01:29:12.515103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.515108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.523071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.523105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:38.940 [2024-12-14 01:29:12.523113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.523120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:38.940 [2024-12-14 01:29:12.529201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:38.940 [2024-12-14 01:29:12.529254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 
00:29:38.940 [2024-12-14 01:29:12.529321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:38.940 [2024-12-14 01:29:12.529391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:38.940 [2024-12-14 01:29:12.529435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:38.940 [2024-12-14 01:29:12.529483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.940 [2024-12-14 01:29:12.529520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:38.940 [2024-12-14 01:29:12.529529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:38.940 [2024-12-14 01:29:12.529536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:38.940 [2024-12-14 01:29:12.529541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:38.940 [2024-12-14 01:29:12.529666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 33.667 ms, result 0 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:39.201 Remove shared memory files 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95670 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:39.201 00:29:39.201 real 1m14.461s 00:29:39.201 user 1m39.478s 00:29:39.201 sys 0m21.271s 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.201 ************************************ 00:29:39.201 END TEST ftl_upgrade_shutdown 00:29:39.201 ************************************ 00:29:39.201 01:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:39.201 01:29:12 ftl -- 
ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:39.201 01:29:12 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:39.201 01:29:12 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:39.201 01:29:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.201 01:29:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:39.201 ************************************ 00:29:39.201 START TEST ftl_restore_fast 00:29:39.201 ************************************ 00:29:39.201 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:39.463 * Looking for test storage... 00:29:39.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # 
ver1_l=2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:39.463 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:39.463 --rc genhtml_branch_coverage=1 00:29:39.463 --rc genhtml_function_coverage=1 00:29:39.463 --rc genhtml_legend=1 00:29:39.463 --rc geninfo_all_blocks=1 00:29:39.463 --rc geninfo_unexecuted_blocks=1 00:29:39.463 00:29:39.463 ' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.463 --rc genhtml_branch_coverage=1 00:29:39.463 --rc genhtml_function_coverage=1 00:29:39.463 --rc genhtml_legend=1 00:29:39.463 --rc geninfo_all_blocks=1 00:29:39.463 --rc geninfo_unexecuted_blocks=1 00:29:39.463 00:29:39.463 ' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.463 --rc genhtml_branch_coverage=1 00:29:39.463 --rc genhtml_function_coverage=1 00:29:39.463 --rc genhtml_legend=1 00:29:39.463 --rc geninfo_all_blocks=1 00:29:39.463 --rc geninfo_unexecuted_blocks=1 00:29:39.463 00:29:39.463 ' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:39.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.463 --rc genhtml_branch_coverage=1 00:29:39.463 --rc genhtml_function_coverage=1 00:29:39.463 --rc genhtml_legend=1 00:29:39.463 --rc geninfo_all_blocks=1 00:29:39.463 --rc geninfo_unexecuted_blocks=1 00:29:39.463 00:29:39.463 ' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:39.463 01:29:12 
ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RSAduVQ4Fi 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:39.463 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- 
ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96072 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96072 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96072 ']' 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:39.464 01:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:39.464 [2024-12-14 01:29:13.014458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:39.464 [2024-12-14 01:29:13.014590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96072 ] 00:29:39.724 [2024-12-14 01:29:13.161523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.724 [2024-12-14 01:29:13.192298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:40.296 01:29:13 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 
00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:40.868 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:40.868 { 00:29:40.868 "name": "nvme0n1", 00:29:40.868 "aliases": [ 00:29:40.868 "b5f593e6-22ad-4869-b946-0b7ec9786859" 00:29:40.868 ], 00:29:40.868 "product_name": "NVMe disk", 00:29:40.868 "block_size": 4096, 00:29:40.868 "num_blocks": 1310720, 00:29:40.868 "uuid": "b5f593e6-22ad-4869-b946-0b7ec9786859", 00:29:40.868 "numa_id": -1, 00:29:40.868 "assigned_rate_limits": { 00:29:40.868 "rw_ios_per_sec": 0, 00:29:40.868 "rw_mbytes_per_sec": 0, 00:29:40.868 "r_mbytes_per_sec": 0, 00:29:40.868 "w_mbytes_per_sec": 0 00:29:40.868 }, 00:29:40.868 "claimed": true, 00:29:40.868 "claim_type": "read_many_write_one", 00:29:40.868 "zoned": false, 00:29:40.868 "supported_io_types": { 00:29:40.868 "read": true, 00:29:40.868 "write": true, 00:29:40.868 "unmap": true, 00:29:40.868 "flush": true, 00:29:40.868 "reset": true, 00:29:40.868 "nvme_admin": true, 00:29:40.868 "nvme_io": true, 00:29:40.868 "nvme_io_md": false, 00:29:40.868 "write_zeroes": true, 00:29:40.868 "zcopy": false, 00:29:40.868 "get_zone_info": false, 00:29:40.868 "zone_management": false, 00:29:40.868 "zone_append": false, 00:29:40.868 "compare": true, 00:29:40.868 "compare_and_write": false, 00:29:40.868 "abort": true, 00:29:40.868 "seek_hole": false, 00:29:40.868 "seek_data": false, 00:29:40.869 "copy": true, 00:29:40.869 "nvme_iov_md": false 00:29:40.869 }, 00:29:40.869 "driver_specific": { 00:29:40.869 "nvme": [ 00:29:40.869 { 00:29:40.869 "pci_address": "0000:00:11.0", 00:29:40.869 "trid": { 00:29:40.869 "trtype": "PCIe", 00:29:40.869 "traddr": "0000:00:11.0" 00:29:40.869 }, 00:29:40.869 "ctrlr_data": { 00:29:40.869 "cntlid": 0, 00:29:40.869 "vendor_id": "0x1b36", 00:29:40.869 "model_number": "QEMU NVMe Ctrl", 00:29:40.869 "serial_number": "12341", 00:29:40.869 
"firmware_revision": "8.0.0", 00:29:40.869 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:40.869 "oacs": { 00:29:40.869 "security": 0, 00:29:40.869 "format": 1, 00:29:40.869 "firmware": 0, 00:29:40.869 "ns_manage": 1 00:29:40.869 }, 00:29:40.869 "multi_ctrlr": false, 00:29:40.869 "ana_reporting": false 00:29:40.869 }, 00:29:40.869 "vs": { 00:29:40.869 "nvme_version": "1.4" 00:29:40.869 }, 00:29:40.869 "ns_data": { 00:29:40.869 "id": 1, 00:29:40.869 "can_share": false 00:29:40.869 } 00:29:40.869 } 00:29:40.869 ], 00:29:40.869 "mp_policy": "active_passive" 00:29:40.869 } 00:29:40.869 } 00:29:40.869 ]' 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:40.869 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:41.130 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=15367675-d63f-4aef-9c2c-a96715ec7b42 00:29:41.130 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:41.130 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
15367675-d63f-4aef-9c2c-a96715ec7b42 00:29:41.390 01:29:14 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=53c45f5e-4ddd-4721-98c3-cf176fd259aa 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53c45f5e-4ddd-4721-98c3-cf176fd259aa 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:41.657 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:41.658 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b760cde4-069a-4d6b-94a0-516be3af1614 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:41.920 { 00:29:41.920 "name": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:41.920 "aliases": [ 00:29:41.920 "lvs/nvme0n1p0" 00:29:41.920 ], 00:29:41.920 "product_name": "Logical Volume", 00:29:41.920 "block_size": 4096, 00:29:41.920 "num_blocks": 26476544, 00:29:41.920 "uuid": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:41.920 "assigned_rate_limits": { 00:29:41.920 "rw_ios_per_sec": 0, 00:29:41.920 "rw_mbytes_per_sec": 0, 00:29:41.920 "r_mbytes_per_sec": 0, 00:29:41.920 "w_mbytes_per_sec": 0 00:29:41.920 }, 00:29:41.920 "claimed": false, 00:29:41.920 "zoned": false, 00:29:41.920 "supported_io_types": { 00:29:41.920 "read": true, 00:29:41.920 "write": true, 00:29:41.920 "unmap": true, 00:29:41.920 "flush": false, 00:29:41.920 "reset": true, 00:29:41.920 "nvme_admin": false, 00:29:41.920 "nvme_io": false, 00:29:41.920 "nvme_io_md": false, 00:29:41.920 "write_zeroes": true, 00:29:41.920 "zcopy": false, 00:29:41.920 "get_zone_info": false, 00:29:41.920 "zone_management": false, 00:29:41.920 "zone_append": false, 00:29:41.920 "compare": false, 00:29:41.920 "compare_and_write": false, 00:29:41.920 "abort": false, 00:29:41.920 "seek_hole": true, 00:29:41.920 "seek_data": true, 00:29:41.920 "copy": false, 00:29:41.920 "nvme_iov_md": false 00:29:41.920 }, 00:29:41.920 "driver_specific": { 00:29:41.920 "lvol": { 00:29:41.920 "lvol_store_uuid": "53c45f5e-4ddd-4721-98c3-cf176fd259aa", 00:29:41.920 "base_bdev": "nvme0n1", 00:29:41.920 "thin_provision": true, 00:29:41.920 "num_allocated_clusters": 0, 00:29:41.920 "snapshot": false, 00:29:41.920 "clone": false, 00:29:41.920 "esnap_clone": false 00:29:41.920 } 00:29:41.920 } 00:29:41.920 } 00:29:41.920 ]' 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- 
# jq '.[] .num_blocks' 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:41.920 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:42.180 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:42.180 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:42.180 01:29:15 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.180 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.180 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:42.181 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:42.181 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:42.181 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.442 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:42.442 { 00:29:42.442 "name": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:42.442 "aliases": [ 00:29:42.442 "lvs/nvme0n1p0" 00:29:42.442 ], 00:29:42.442 "product_name": "Logical Volume", 00:29:42.442 "block_size": 4096, 00:29:42.442 "num_blocks": 26476544, 00:29:42.442 "uuid": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:42.442 "assigned_rate_limits": { 
00:29:42.442 "rw_ios_per_sec": 0, 00:29:42.442 "rw_mbytes_per_sec": 0, 00:29:42.442 "r_mbytes_per_sec": 0, 00:29:42.442 "w_mbytes_per_sec": 0 00:29:42.442 }, 00:29:42.442 "claimed": false, 00:29:42.442 "zoned": false, 00:29:42.442 "supported_io_types": { 00:29:42.442 "read": true, 00:29:42.442 "write": true, 00:29:42.442 "unmap": true, 00:29:42.442 "flush": false, 00:29:42.442 "reset": true, 00:29:42.442 "nvme_admin": false, 00:29:42.442 "nvme_io": false, 00:29:42.442 "nvme_io_md": false, 00:29:42.442 "write_zeroes": true, 00:29:42.442 "zcopy": false, 00:29:42.442 "get_zone_info": false, 00:29:42.442 "zone_management": false, 00:29:42.442 "zone_append": false, 00:29:42.442 "compare": false, 00:29:42.442 "compare_and_write": false, 00:29:42.442 "abort": false, 00:29:42.442 "seek_hole": true, 00:29:42.442 "seek_data": true, 00:29:42.442 "copy": false, 00:29:42.442 "nvme_iov_md": false 00:29:42.442 }, 00:29:42.442 "driver_specific": { 00:29:42.442 "lvol": { 00:29:42.442 "lvol_store_uuid": "53c45f5e-4ddd-4721-98c3-cf176fd259aa", 00:29:42.442 "base_bdev": "nvme0n1", 00:29:42.442 "thin_provision": true, 00:29:42.442 "num_allocated_clusters": 0, 00:29:42.442 "snapshot": false, 00:29:42.442 "clone": false, 00:29:42.442 "esnap_clone": false 00:29:42.442 } 00:29:42.442 } 00:29:42.442 } 00:29:42.442 ]' 00:29:42.442 01:29:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:42.442 01:29:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:42.442 
01:29:16 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:42.703 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b760cde4-069a-4d6b-94a0-516be3af1614 00:29:42.964 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:42.964 { 00:29:42.964 "name": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:42.964 "aliases": [ 00:29:42.964 "lvs/nvme0n1p0" 00:29:42.964 ], 00:29:42.964 "product_name": "Logical Volume", 00:29:42.964 "block_size": 4096, 00:29:42.964 "num_blocks": 26476544, 00:29:42.964 "uuid": "b760cde4-069a-4d6b-94a0-516be3af1614", 00:29:42.964 "assigned_rate_limits": { 00:29:42.964 "rw_ios_per_sec": 0, 00:29:42.964 "rw_mbytes_per_sec": 0, 00:29:42.964 "r_mbytes_per_sec": 0, 00:29:42.964 "w_mbytes_per_sec": 0 00:29:42.964 }, 00:29:42.964 "claimed": false, 00:29:42.964 "zoned": false, 00:29:42.964 "supported_io_types": { 00:29:42.964 "read": true, 00:29:42.964 "write": true, 00:29:42.964 "unmap": true, 00:29:42.964 "flush": false, 00:29:42.964 "reset": true, 00:29:42.964 "nvme_admin": false, 00:29:42.964 "nvme_io": false, 00:29:42.964 "nvme_io_md": false, 00:29:42.964 "write_zeroes": true, 00:29:42.964 "zcopy": false, 00:29:42.964 "get_zone_info": false, 
00:29:42.964 "zone_management": false, 00:29:42.964 "zone_append": false, 00:29:42.964 "compare": false, 00:29:42.964 "compare_and_write": false, 00:29:42.964 "abort": false, 00:29:42.964 "seek_hole": true, 00:29:42.964 "seek_data": true, 00:29:42.964 "copy": false, 00:29:42.964 "nvme_iov_md": false 00:29:42.964 }, 00:29:42.964 "driver_specific": { 00:29:42.964 "lvol": { 00:29:42.964 "lvol_store_uuid": "53c45f5e-4ddd-4721-98c3-cf176fd259aa", 00:29:42.964 "base_bdev": "nvme0n1", 00:29:42.964 "thin_provision": true, 00:29:42.964 "num_allocated_clusters": 0, 00:29:42.964 "snapshot": false, 00:29:42.964 "clone": false, 00:29:42.964 "esnap_clone": false 00:29:42.964 } 00:29:42.964 } 00:29:42.964 } 00:29:42.964 ]' 00:29:42.964 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:42.964 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:42.964 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:42.964 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b760cde4-069a-4d6b-94a0-516be3af1614 --l2p_dram_limit 10' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # 
ftl_construct_args+=' --fast-shutdown' 00:29:42.965 01:29:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b760cde4-069a-4d6b-94a0-516be3af1614 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:43.226 [2024-12-14 01:29:16.703657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.226 [2024-12-14 01:29:16.703700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:43.226 [2024-12-14 01:29:16.703711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:43.226 [2024-12-14 01:29:16.703719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.226 [2024-12-14 01:29:16.703764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.703773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:43.227 [2024-12-14 01:29:16.703781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:43.227 [2024-12-14 01:29:16.703790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.703804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:43.227 [2024-12-14 01:29:16.704335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:43.227 [2024-12-14 01:29:16.704417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.704446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:43.227 [2024-12-14 01:29:16.704480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:29:43.227 [2024-12-14 01:29:16.704504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.704746] mngt/ftl_mngt_md.c: 
570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:29:43.227 [2024-12-14 01:29:16.706486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.706553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:43.227 [2024-12-14 01:29:16.706587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:43.227 [2024-12-14 01:29:16.706607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.714045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.714109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:43.227 [2024-12-14 01:29:16.714136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.295 ms 00:29:43.227 [2024-12-14 01:29:16.714156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.714338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.714369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:43.227 [2024-12-14 01:29:16.714400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:29:43.227 [2024-12-14 01:29:16.714427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.714574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.714647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:43.227 [2024-12-14 01:29:16.714681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:43.227 [2024-12-14 01:29:16.714708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.714774] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:43.227 [2024-12-14 01:29:16.717688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.717751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:43.227 [2024-12-14 01:29:16.717774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:29:43.227 [2024-12-14 01:29:16.717797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.717895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.717931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:43.227 [2024-12-14 01:29:16.717960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:43.227 [2024-12-14 01:29:16.717994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.718044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:43.227 [2024-12-14 01:29:16.718282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:43.227 [2024-12-14 01:29:16.718298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:43.227 [2024-12-14 01:29:16.718311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:43.227 [2024-12-14 01:29:16.718325] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718340] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718349] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:43.227 [2024-12-14 
01:29:16.718361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:43.227 [2024-12-14 01:29:16.718369] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:43.227 [2024-12-14 01:29:16.718380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:43.227 [2024-12-14 01:29:16.718392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.718402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:43.227 [2024-12-14 01:29:16.718414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:29:43.227 [2024-12-14 01:29:16.718424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.718513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.227 [2024-12-14 01:29:16.718528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:43.227 [2024-12-14 01:29:16.718540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:43.227 [2024-12-14 01:29:16.718552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.227 [2024-12-14 01:29:16.718674] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:43.227 [2024-12-14 01:29:16.718691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:43.227 [2024-12-14 01:29:16.718700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:43.227 [2024-12-14 01:29:16.718734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:43.227 [2024-12-14 01:29:16.718756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:43.227 [2024-12-14 01:29:16.718770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:43.227 [2024-12-14 01:29:16.718778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:43.227 [2024-12-14 01:29:16.718784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:43.227 [2024-12-14 01:29:16.718794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:43.227 [2024-12-14 01:29:16.718801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:43.227 [2024-12-14 01:29:16.718808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:43.227 [2024-12-14 01:29:16.718822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:43.227 [2024-12-14 01:29:16.718843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:43.227 [2024-12-14 01:29:16.718865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718874] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:43.227 [2024-12-14 01:29:16.718889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:43.227 [2024-12-14 01:29:16.718914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.227 [2024-12-14 01:29:16.718928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:43.227 [2024-12-14 01:29:16.718934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:43.227 [2024-12-14 01:29:16.718948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:43.227 [2024-12-14 01:29:16.718956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:43.227 [2024-12-14 01:29:16.718962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:43.227 [2024-12-14 01:29:16.718970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:43.227 [2024-12-14 01:29:16.718977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:43.227 [2024-12-14 01:29:16.718985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.718992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:43.227 [2024-12-14 01:29:16.719001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:43.227 [2024-12-14 01:29:16.719007] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.719014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:43.227 [2024-12-14 01:29:16.719026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:43.227 [2024-12-14 01:29:16.719036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:43.227 [2024-12-14 01:29:16.719044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.227 [2024-12-14 01:29:16.719053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:43.227 [2024-12-14 01:29:16.719059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:43.227 [2024-12-14 01:29:16.719067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:43.227 [2024-12-14 01:29:16.719073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:43.227 [2024-12-14 01:29:16.719082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:43.227 [2024-12-14 01:29:16.719089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:43.227 [2024-12-14 01:29:16.719099] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:43.228 [2024-12-14 01:29:16.719110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:43.228 [2024-12-14 01:29:16.719128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:43.228 [2024-12-14 01:29:16.719137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:43.228 [2024-12-14 01:29:16.719144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:43.228 [2024-12-14 01:29:16.719152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:43.228 [2024-12-14 01:29:16.719159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:43.228 [2024-12-14 01:29:16.719169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:43.228 [2024-12-14 01:29:16.719176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:43.228 [2024-12-14 01:29:16.719184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:43.228 [2024-12-14 01:29:16.719191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:43.228 [2024-12-14 
01:29:16.719230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:43.228 [2024-12-14 01:29:16.719237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:43.228 [2024-12-14 01:29:16.719253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:43.228 [2024-12-14 01:29:16.719261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:43.228 [2024-12-14 01:29:16.719268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:43.228 [2024-12-14 01:29:16.719277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.228 [2024-12-14 01:29:16.719284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:43.228 [2024-12-14 01:29:16.719296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:29:43.228 [2024-12-14 01:29:16.719303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.228 [2024-12-14 01:29:16.719340] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
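The numbers in the log above are internally consistent and can be checked by hand: the `jq '.[] .block_size'` / `jq '.[] .num_blocks'` probes earlier in this run returned bs=4096 and nb=26476544, which the test harness turns into bdev_size=103424 (MiB), and `ftl_layout_setup` reports 20971520 L2P entries at 4 bytes each, which accounts for the 80.00 MiB `l2p` region in the NV cache layout dump. A minimal sketch of that arithmetic (the helper names here are illustrative, not SPDK functions):

```python
# Reconstruction of the size arithmetic visible in this log.
# Inputs are the values reported above; helper names are hypothetical.

def bdev_size_mib(block_size: int, num_blocks: int) -> int:
    """Total bdev capacity in MiB: block_size * num_blocks / 1 MiB."""
    return block_size * num_blocks // (1024 * 1024)

def l2p_region_mib(l2p_entries: int, addr_size: int) -> float:
    """FTL L2P region size: one address entry per logical block."""
    return l2p_entries * addr_size / (1024 * 1024)

# bs=4096, nb=26476544 as probed via jq earlier in this run
print(bdev_size_mib(4096, 26476544))   # 103424 -> matches bdev_size=103424
# "L2P entries: 20971520" and "L2P address size: 4" from ftl_layout_setup
print(l2p_region_mib(20971520, 4))     # 80.0 -> matches "Region l2p ... 80.00 MiB"
```

The same check works for the base-device layout: the `data_btm` region of 102400.00 MiB is the 100 GiB main data area, leaving the small `sb_mirror` and `vmap` regions as metadata overhead.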
00:29:43.228 [2024-12-14 01:29:16.719352] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:46.534 [2024-12-14 01:29:19.928477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.928554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:46.534 [2024-12-14 01:29:19.928572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3209.117 ms 00:29:46.534 [2024-12-14 01:29:19.928581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.937591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.937659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.534 [2024-12-14 01:29:19.937674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:29:46.534 [2024-12-14 01:29:19.937682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.937796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.937806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:46.534 [2024-12-14 01:29:19.937817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:46.534 [2024-12-14 01:29:19.937824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.946928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.946966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.534 [2024-12-14 01:29:19.946978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.043 ms 00:29:46.534 [2024-12-14 01:29:19.946989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.947025] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.947034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.534 [2024-12-14 01:29:19.947044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:46.534 [2024-12-14 01:29:19.947051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.947431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.947457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.534 [2024-12-14 01:29:19.947468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:46.534 [2024-12-14 01:29:19.947476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.947590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.947600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.534 [2024-12-14 01:29:19.947611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:46.534 [2024-12-14 01:29:19.947653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.953563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:19.953599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.534 [2024-12-14 01:29:19.953610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.887 ms 00:29:46.534 [2024-12-14 01:29:19.953643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:19.972052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:46.534 [2024-12-14 01:29:19.975364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 
[2024-12-14 01:29:19.975403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:46.534 [2024-12-14 01:29:19.975415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.651 ms 00:29:46.534 [2024-12-14 01:29:19.975425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.050322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.050384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:46.534 [2024-12-14 01:29:20.050400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.857 ms 00:29:46.534 [2024-12-14 01:29:20.050413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.050598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.050612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:46.534 [2024-12-14 01:29:20.050635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:29:46.534 [2024-12-14 01:29:20.050646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.055011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.055060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:46.534 [2024-12-14 01:29:20.055073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:29:46.534 [2024-12-14 01:29:20.055083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.059297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.059343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:46.534 [2024-12-14 01:29:20.059353] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:29:46.534 [2024-12-14 01:29:20.059362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.059681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.059695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:46.534 [2024-12-14 01:29:20.059704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:29:46.534 [2024-12-14 01:29:20.059716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.097811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.097863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:46.534 [2024-12-14 01:29:20.097877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.075 ms 00:29:46.534 [2024-12-14 01:29:20.097888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.103593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.103651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:46.534 [2024-12-14 01:29:20.103667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.654 ms 00:29:46.534 [2024-12-14 01:29:20.103677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.108293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.108341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:46.534 [2024-12-14 01:29:20.108350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:29:46.534 [2024-12-14 01:29:20.108359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 
01:29:20.113580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.113660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:46.534 [2024-12-14 01:29:20.113671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:29:46.534 [2024-12-14 01:29:20.113683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.113726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.113737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:46.534 [2024-12-14 01:29:20.113746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:46.534 [2024-12-14 01:29:20.113756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.113834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-14 01:29:20.113847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:46.534 [2024-12-14 01:29:20.113855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:46.534 [2024-12-14 01:29:20.113872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-14 01:29:20.114895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3410.776 ms, result 0 00:29:46.534 { 00:29:46.534 "name": "ftl0", 00:29:46.534 "uuid": "2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea" 00:29:46.534 } 00:29:46.795 01:29:20 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:46.795 01:29:20 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:46.795 01:29:20 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:46.795 01:29:20 ftl.ftl_restore_fast -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:47.086 [2024-12-14 01:29:20.542402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.542466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:47.086 [2024-12-14 01:29:20.542485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.086 [2024-12-14 01:29:20.542495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.542528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:47.086 [2024-12-14 01:29:20.543296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.543348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:47.086 [2024-12-14 01:29:20.543361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:29:47.086 [2024-12-14 01:29:20.543384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.543680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.543700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:47.086 [2024-12-14 01:29:20.543710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:29:47.086 [2024-12-14 01:29:20.543725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.546975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.547007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:47.086 [2024-12-14 01:29:20.547017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:29:47.086 [2024-12-14 01:29:20.547028] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.553197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.553246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:47.086 [2024-12-14 01:29:20.553257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.150 ms 00:29:47.086 [2024-12-14 01:29:20.553272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.556391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.556454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:47.086 [2024-12-14 01:29:20.556465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:29:47.086 [2024-12-14 01:29:20.556475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.563471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.563531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:47.086 [2024-12-14 01:29:20.563543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.949 ms 00:29:47.086 [2024-12-14 01:29:20.563554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.563711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.563731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:47.086 [2024-12-14 01:29:20.563743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:29:47.086 [2024-12-14 01:29:20.563753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.566807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 
01:29:20.566866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:47.086 [2024-12-14 01:29:20.566876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:29:47.086 [2024-12-14 01:29:20.566886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.569520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.569576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:47.086 [2024-12-14 01:29:20.569586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:29:47.086 [2024-12-14 01:29:20.569597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.086 [2024-12-14 01:29:20.571805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.086 [2024-12-14 01:29:20.571861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:47.087 [2024-12-14 01:29:20.571871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:29:47.087 [2024-12-14 01:29:20.571880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.087 [2024-12-14 01:29:20.574013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.087 [2024-12-14 01:29:20.574071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:47.087 [2024-12-14 01:29:20.574081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:29:47.087 [2024-12-14 01:29:20.574090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.087 [2024-12-14 01:29:20.574132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:47.087 [2024-12-14 01:29:20.574149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574297] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574420] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 
01:29:20.574574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 
[2024-12-14 01:29:20.574719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:29:47.087 [2024-12-14 01:29:20.574845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:47.087 [2024-12-14 01:29:20.574956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: 
free 00:29:47.087 [2024-12-14 01:29:20.574963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.574972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.574979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.574989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.574997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 
state: free 00:29:47.088 [2024-12-14 01:29:20.575086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:47.088 [2024-12-14 01:29:20.575106] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:47.088 [2024-12-14 01:29:20.575114] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:29:47.088 [2024-12-14 01:29:20.575124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:47.088 [2024-12-14 01:29:20.575132] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:47.088 [2024-12-14 01:29:20.575141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:47.088 [2024-12-14 01:29:20.575150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:47.088 [2024-12-14 01:29:20.575159] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:47.088 [2024-12-14 01:29:20.575170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:47.088 [2024-12-14 01:29:20.575195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:47.088 [2024-12-14 01:29:20.575202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:47.088 [2024-12-14 01:29:20.575210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:47.088 [2024-12-14 01:29:20.575217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.088 [2024-12-14 01:29:20.575227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:47.088 [2024-12-14 01:29:20.575236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:29:47.088 [2024-12-14 01:29:20.575245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.577542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.088 [2024-12-14 
01:29:20.577586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:47.088 [2024-12-14 01:29:20.577596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:29:47.088 [2024-12-14 01:29:20.577609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.577791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.088 [2024-12-14 01:29:20.577806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:47.088 [2024-12-14 01:29:20.577816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:47.088 [2024-12-14 01:29:20.577825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.585748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.585804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.088 [2024-12-14 01:29:20.585823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.585834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.585899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.585910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.088 [2024-12-14 01:29:20.585919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.585929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.585993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.586010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.088 [2024-12-14 01:29:20.586019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.586031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.586049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.586060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.088 [2024-12-14 01:29:20.586067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.586077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.600060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.600124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.088 [2024-12-14 01:29:20.600136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.600150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.088 [2024-12-14 01:29:20.611174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.088 [2024-12-14 01:29:20.611288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611350] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.088 [2024-12-14 01:29:20.611371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.088 [2024-12-14 01:29:20.611476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:47.088 [2024-12-14 01:29:20.611542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.088 [2024-12-14 01:29:20.611616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.088 [2024-12-14 01:29:20.611710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:29:47.088 [2024-12-14 01:29:20.611718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.088 [2024-12-14 01:29:20.611728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.088 [2024-12-14 01:29:20.611869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.434 ms, result 0 00:29:47.088 true 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96072 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96072 ']' 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96072 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96072 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.088 killing process with pid 96072 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96072' 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96072 00:29:47.088 01:29:20 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96072 00:29:52.399 01:29:25 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:56.608 262144+0 records in 00:29:56.608 262144+0 records out 00:29:56.608 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.49683 s, 239 MB/s 00:29:56.608 01:29:29 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:58.521 
01:29:31 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:58.521 [2024-12-14 01:29:31.801181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:58.521 [2024-12-14 01:29:31.801310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96290 ] 00:29:58.521 [2024-12-14 01:29:31.945607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.521 [2024-12-14 01:29:31.965664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.521 [2024-12-14 01:29:32.057794] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.521 [2024-12-14 01:29:32.057881] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.784 [2024-12-14 01:29:32.214667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.214713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:58.784 [2024-12-14 01:29:32.214732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:58.784 [2024-12-14 01:29:32.214744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.214812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.214827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:58.784 [2024-12-14 01:29:32.214840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:58.784 [2024-12-14 01:29:32.214861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.214900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:58.784 [2024-12-14 01:29:32.215177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:58.784 [2024-12-14 01:29:32.215200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.215215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:58.784 [2024-12-14 01:29:32.215232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:29:58.784 [2024-12-14 01:29:32.215240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.216522] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:58.784 [2024-12-14 01:29:32.219181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.219217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:58.784 [2024-12-14 01:29:32.219233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:29:58.784 [2024-12-14 01:29:32.219252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.219314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.219328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:58.784 [2024-12-14 01:29:32.219344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:58.784 [2024-12-14 01:29:32.219355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.224494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.224533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize memory pools 00:29:58.784 [2024-12-14 01:29:32.224552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.062 ms 00:29:58.784 [2024-12-14 01:29:32.224563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.224686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.224705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:58.784 [2024-12-14 01:29:32.224721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:58.784 [2024-12-14 01:29:32.224728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.224773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.224787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:58.784 [2024-12-14 01:29:32.224799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:58.784 [2024-12-14 01:29:32.224813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.224843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:58.784 [2024-12-14 01:29:32.226306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.226340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:58.784 [2024-12-14 01:29:32.226353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:29:58.784 [2024-12-14 01:29:32.226365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.226409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.226422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:58.784 
[2024-12-14 01:29:32.226435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:58.784 [2024-12-14 01:29:32.226449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.226477] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:58.784 [2024-12-14 01:29:32.226511] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:58.784 [2024-12-14 01:29:32.226556] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:58.784 [2024-12-14 01:29:32.226582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:58.784 [2024-12-14 01:29:32.226732] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:58.784 [2024-12-14 01:29:32.226750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:58.784 [2024-12-14 01:29:32.226770] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:58.784 [2024-12-14 01:29:32.226790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:58.784 [2024-12-14 01:29:32.226803] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:58.784 [2024-12-14 01:29:32.226815] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:58.784 [2024-12-14 01:29:32.226827] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:58.784 [2024-12-14 01:29:32.226841] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:58.784 [2024-12-14 01:29:32.226853] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:29:58.784 [2024-12-14 01:29:32.226865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.226881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:58.784 [2024-12-14 01:29:32.226900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:29:58.784 [2024-12-14 01:29:32.226911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.227030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.784 [2024-12-14 01:29:32.227053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:58.784 [2024-12-14 01:29:32.227070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:58.784 [2024-12-14 01:29:32.227082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.784 [2024-12-14 01:29:32.227209] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:58.784 [2024-12-14 01:29:32.227225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:58.784 [2024-12-14 01:29:32.227238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.784 [2024-12-14 01:29:32.227255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.784 [2024-12-14 01:29:32.227267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:58.784 [2024-12-14 01:29:32.227278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:58.785 [2024-12-14 01:29:32.227311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227321] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.785 [2024-12-14 01:29:32.227332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:58.785 [2024-12-14 01:29:32.227342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:58.785 [2024-12-14 01:29:32.227357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.785 [2024-12-14 01:29:32.227369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:58.785 [2024-12-14 01:29:32.227380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:58.785 [2024-12-14 01:29:32.227390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:58.785 [2024-12-14 01:29:32.227412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:58.785 [2024-12-14 01:29:32.227445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:58.785 [2024-12-14 01:29:32.227478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:58.785 [2024-12-14 01:29:32.227511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227522] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:58.785 [2024-12-14 01:29:32.227551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:58.785 [2024-12-14 01:29:32.227585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.785 [2024-12-14 01:29:32.227604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:58.785 [2024-12-14 01:29:32.227614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:58.785 [2024-12-14 01:29:32.227649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.785 [2024-12-14 01:29:32.227660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:58.785 [2024-12-14 01:29:32.227671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:58.785 [2024-12-14 01:29:32.227682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:58.785 [2024-12-14 01:29:32.227702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:58.785 [2024-12-14 01:29:32.227714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227724] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:58.785 [2024-12-14 01:29:32.227742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:58.785 
[2024-12-14 01:29:32.227756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.785 [2024-12-14 01:29:32.227779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:58.785 [2024-12-14 01:29:32.227790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:58.785 [2024-12-14 01:29:32.227800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:58.785 [2024-12-14 01:29:32.227811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:58.785 [2024-12-14 01:29:32.227822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:58.785 [2024-12-14 01:29:32.227833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:58.785 [2024-12-14 01:29:32.227846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:58.785 [2024-12-14 01:29:32.227861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.227874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:58.785 [2024-12-14 01:29:32.227887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:58.785 [2024-12-14 01:29:32.227898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:58.785 [2024-12-14 01:29:32.227909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:58.785 [2024-12-14 01:29:32.227922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:58.785 [2024-12-14 01:29:32.227936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:58.785 [2024-12-14 01:29:32.227948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:58.785 [2024-12-14 01:29:32.227960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:58.785 [2024-12-14 01:29:32.227971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:58.785 [2024-12-14 01:29:32.227988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.228000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.228012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.228023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.228036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:58.785 [2024-12-14 01:29:32.228048] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:58.785 [2024-12-14 01:29:32.228062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.785 [2024-12-14 
01:29:32.228074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:58.785 [2024-12-14 01:29:32.228085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:58.785 [2024-12-14 01:29:32.228096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:58.785 [2024-12-14 01:29:32.228108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:58.785 [2024-12-14 01:29:32.228121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.228143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:58.785 [2024-12-14 01:29:32.228155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:29:58.785 [2024-12-14 01:29:32.228177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.237443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.237488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:58.785 [2024-12-14 01:29:32.237502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.187 ms 00:29:58.785 [2024-12-14 01:29:32.237514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.237616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.237667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:58.785 [2024-12-14 01:29:32.237680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:58.785 [2024-12-14 01:29:32.237693] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.256287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.256341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:58.785 [2024-12-14 01:29:32.256361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.535 ms 00:29:58.785 [2024-12-14 01:29:32.256378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.256447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.256466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:58.785 [2024-12-14 01:29:32.256483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:58.785 [2024-12-14 01:29:32.256505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.257011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.257044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:58.785 [2024-12-14 01:29:32.257061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:29:58.785 [2024-12-14 01:29:32.257076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.257285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 01:29:32.257306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:58.785 [2024-12-14 01:29:32.257322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:29:58.785 [2024-12-14 01:29:32.257338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.785 [2024-12-14 01:29:32.263668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.785 [2024-12-14 
01:29:32.263707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:58.785 [2024-12-14 01:29:32.263721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:29:58.785 [2024-12-14 01:29:32.263733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.266726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:58.786 [2024-12-14 01:29:32.266764] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:58.786 [2024-12-14 01:29:32.266778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.266787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:58.786 [2024-12-14 01:29:32.266795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:29:58.786 [2024-12-14 01:29:32.266802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.281485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.281531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:58.786 [2024-12-14 01:29:32.281541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.644 ms 00:29:58.786 [2024-12-14 01:29:32.281549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.283561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.283596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:58.786 [2024-12-14 01:29:32.283605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:29:58.786 [2024-12-14 01:29:32.283612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:58.786 [2024-12-14 01:29:32.285669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.285699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:58.786 [2024-12-14 01:29:32.285707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms 00:29:58.786 [2024-12-14 01:29:32.285715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.286048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.286060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:58.786 [2024-12-14 01:29:32.286069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:29:58.786 [2024-12-14 01:29:32.286077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.305168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.305224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:58.786 [2024-12-14 01:29:32.305237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.076 ms 00:29:58.786 [2024-12-14 01:29:32.305245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.313243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:58.786 [2024-12-14 01:29:32.316099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.316133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:58.786 [2024-12-14 01:29:32.316152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.795 ms 00:29:58.786 [2024-12-14 01:29:32.316160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.316230] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.316240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:58.786 [2024-12-14 01:29:32.316249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:58.786 [2024-12-14 01:29:32.316264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.316349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.316359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:58.786 [2024-12-14 01:29:32.316367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:58.786 [2024-12-14 01:29:32.316378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.316400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.316408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:58.786 [2024-12-14 01:29:32.316417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:58.786 [2024-12-14 01:29:32.316424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.316453] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:58.786 [2024-12-14 01:29:32.316466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.316473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:58.786 [2024-12-14 01:29:32.316481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:58.786 [2024-12-14 01:29:32.316489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.320926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:58.786 [2024-12-14 01:29:32.320967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:58.786 [2024-12-14 01:29:32.320984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:29:58.786 [2024-12-14 01:29:32.320992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.321061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.786 [2024-12-14 01:29:32.321071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:58.786 [2024-12-14 01:29:32.321080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:58.786 [2024-12-14 01:29:32.321087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.786 [2024-12-14 01:29:32.322700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.629 ms, result 0 00:29:59.727  [2024-12-14T01:29:34.721Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-14T01:29:35.663Z] Copying: 62/1024 [MB] (46 MBps) [2024-12-14T01:29:36.605Z] Copying: 112/1024 [MB] (49 MBps) [2024-12-14T01:29:37.547Z] Copying: 164/1024 [MB] (52 MBps) [2024-12-14T01:29:38.491Z] Copying: 192/1024 [MB] (27 MBps) [2024-12-14T01:29:39.434Z] Copying: 210/1024 [MB] (17 MBps) [2024-12-14T01:29:40.378Z] Copying: 257/1024 [MB] (46 MBps) [2024-12-14T01:29:41.764Z] Copying: 271/1024 [MB] (14 MBps) [2024-12-14T01:29:42.335Z] Copying: 300/1024 [MB] (28 MBps) [2024-12-14T01:29:43.721Z] Copying: 320/1024 [MB] (19 MBps) [2024-12-14T01:29:44.662Z] Copying: 353/1024 [MB] (33 MBps) [2024-12-14T01:29:45.603Z] Copying: 368/1024 [MB] (14 MBps) [2024-12-14T01:29:46.545Z] Copying: 393/1024 [MB] (25 MBps) [2024-12-14T01:29:47.487Z] Copying: 431/1024 [MB] (38 MBps) [2024-12-14T01:29:48.459Z] Copying: 483/1024 [MB] (51 MBps) [2024-12-14T01:29:49.403Z] Copying: 509/1024 [MB] (26 MBps) [2024-12-14T01:29:50.346Z] Copying: 530/1024 [MB] 
(20 MBps) [2024-12-14T01:29:51.733Z] Copying: 577/1024 [MB] (47 MBps) [2024-12-14T01:29:52.674Z] Copying: 601/1024 [MB] (23 MBps) [2024-12-14T01:29:53.616Z] Copying: 653/1024 [MB] (51 MBps) [2024-12-14T01:29:54.558Z] Copying: 702/1024 [MB] (48 MBps) [2024-12-14T01:29:55.501Z] Copying: 717/1024 [MB] (15 MBps) [2024-12-14T01:29:56.443Z] Copying: 744/1024 [MB] (27 MBps) [2024-12-14T01:29:57.387Z] Copying: 762/1024 [MB] (17 MBps) [2024-12-14T01:29:58.774Z] Copying: 777/1024 [MB] (14 MBps) [2024-12-14T01:29:59.347Z] Copying: 796/1024 [MB] (18 MBps) [2024-12-14T01:30:00.736Z] Copying: 811/1024 [MB] (15 MBps) [2024-12-14T01:30:01.678Z] Copying: 827/1024 [MB] (15 MBps) [2024-12-14T01:30:02.619Z] Copying: 844/1024 [MB] (17 MBps) [2024-12-14T01:30:03.562Z] Copying: 859/1024 [MB] (15 MBps) [2024-12-14T01:30:04.506Z] Copying: 874/1024 [MB] (15 MBps) [2024-12-14T01:30:05.448Z] Copying: 892/1024 [MB] (17 MBps) [2024-12-14T01:30:06.391Z] Copying: 907/1024 [MB] (15 MBps) [2024-12-14T01:30:07.778Z] Copying: 918/1024 [MB] (10 MBps) [2024-12-14T01:30:08.350Z] Copying: 949/1024 [MB] (31 MBps) [2024-12-14T01:30:09.376Z] Copying: 962/1024 [MB] (12 MBps) [2024-12-14T01:30:10.761Z] Copying: 973/1024 [MB] (11 MBps) [2024-12-14T01:30:11.705Z] Copying: 988/1024 [MB] (15 MBps) [2024-12-14T01:30:11.968Z] Copying: 1009/1024 [MB] (20 MBps) [2024-12-14T01:30:11.968Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-14 01:30:11.956647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.356 [2024-12-14 01:30:11.956705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:38.356 [2024-12-14 01:30:11.956721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:38.356 [2024-12-14 01:30:11.956735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.356 [2024-12-14 01:30:11.956757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:30:38.356 [2024-12-14 01:30:11.957531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.356 [2024-12-14 01:30:11.957569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:38.356 [2024-12-14 01:30:11.957580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:30:38.356 [2024-12-14 01:30:11.957588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.356 [2024-12-14 01:30:11.960234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.356 [2024-12-14 01:30:11.960281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:38.356 [2024-12-14 01:30:11.960292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:30:38.356 [2024-12-14 01:30:11.960300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.356 [2024-12-14 01:30:11.960331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.356 [2024-12-14 01:30:11.960339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:38.356 [2024-12-14 01:30:11.960348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:38.356 [2024-12-14 01:30:11.960355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.356 [2024-12-14 01:30:11.960409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.356 [2024-12-14 01:30:11.960418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:38.356 [2024-12-14 01:30:11.960427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:38.356 [2024-12-14 01:30:11.960435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.356 [2024-12-14 01:30:11.960459] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:38.356 [2024-12-14 01:30:11.960475] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:38.356 [2024-12-14 01:30:11.960558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960580] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960702] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 
01:30:11.960806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 
[2024-12-14 01:30:11.960910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.960994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:30:38.357 [2024-12-14 01:30:11.961018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: 
free 00:30:38.357 [2024-12-14 01:30:11.961133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:38.357 [2024-12-14 01:30:11.961223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:38.358 [2024-12-14 01:30:11.961231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 
state: free 00:30:38.358 [2024-12-14 01:30:11.961239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:38.358 [2024-12-14 01:30:11.961246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:38.358 [2024-12-14 01:30:11.961262] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:38.358 [2024-12-14 01:30:11.961269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:30:38.358 [2024-12-14 01:30:11.961278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:38.358 [2024-12-14 01:30:11.961287] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:38.358 [2024-12-14 01:30:11.961299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:38.358 [2024-12-14 01:30:11.961307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:38.358 [2024-12-14 01:30:11.961314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:38.358 [2024-12-14 01:30:11.961321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:38.358 [2024-12-14 01:30:11.961333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:38.358 [2024-12-14 01:30:11.961340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:38.358 [2024-12-14 01:30:11.961347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:38.358 [2024-12-14 01:30:11.961353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.358 [2024-12-14 01:30:11.961364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:38.358 [2024-12-14 01:30:11.961372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:30:38.358 [2024-12-14 01:30:11.961382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:38.358 [2024-12-14 01:30:11.963657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.358 [2024-12-14 01:30:11.963694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:38.358 [2024-12-14 01:30:11.963704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:30:38.358 [2024-12-14 01:30:11.963718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.358 [2024-12-14 01:30:11.963832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.358 [2024-12-14 01:30:11.963846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:38.358 [2024-12-14 01:30:11.963855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:38.358 [2024-12-14 01:30:11.963867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.971216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.971265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:38.620 [2024-12-14 01:30:11.971275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.971284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.971362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.971375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:38.620 [2024-12-14 01:30:11.971386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.971394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.971442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.971451] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:38.620 [2024-12-14 01:30:11.971459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.971466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.971481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.971489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:38.620 [2024-12-14 01:30:11.971499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.971507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.984684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.984735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:38.620 [2024-12-14 01:30:11.984745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.984753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.994509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.994570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:38.620 [2024-12-14 01:30:11.994583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.994592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.994652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.994663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:38.620 [2024-12-14 01:30:11.994672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:30:38.620 [2024-12-14 01:30:11.994680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.994707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.994716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:38.620 [2024-12-14 01:30:11.994725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.994735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.994788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.620 [2024-12-14 01:30:11.994797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:38.620 [2024-12-14 01:30:11.994806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.620 [2024-12-14 01:30:11.994815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.620 [2024-12-14 01:30:11.994839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.621 [2024-12-14 01:30:11.994848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:38.621 [2024-12-14 01:30:11.994856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.621 [2024-12-14 01:30:11.994864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.621 [2024-12-14 01:30:11.994913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.621 [2024-12-14 01:30:11.994923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:38.621 [2024-12-14 01:30:11.994931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.621 [2024-12-14 01:30:11.994938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.621 [2024-12-14 01:30:11.994982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:30:38.621 [2024-12-14 01:30:11.994992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:38.621 [2024-12-14 01:30:11.995000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.621 [2024-12-14 01:30:11.995013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.621 [2024-12-14 01:30:11.995145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.483 ms, result 0 00:30:38.882 00:30:38.882 00:30:38.882 01:30:12 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:38.882 [2024-12-14 01:30:12.451336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:38.882 [2024-12-14 01:30:12.451488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96702 ] 00:30:39.144 [2024-12-14 01:30:12.598952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.144 [2024-12-14 01:30:12.627360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.144 [2024-12-14 01:30:12.743897] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:39.144 [2024-12-14 01:30:12.743990] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:39.406 [2024-12-14 01:30:12.903932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.903985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:39.406 [2024-12-14 01:30:12.904000] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:39.406 [2024-12-14 01:30:12.904008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.904063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.904074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.406 [2024-12-14 01:30:12.904083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:39.406 [2024-12-14 01:30:12.904097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.904125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:39.406 [2024-12-14 01:30:12.904403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:39.406 [2024-12-14 01:30:12.904419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.904430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.406 [2024-12-14 01:30:12.904441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:30:39.406 [2024-12-14 01:30:12.904449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.904731] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:39.406 [2024-12-14 01:30:12.904757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.904766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:39.406 [2024-12-14 01:30:12.904775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:39.406 [2024-12-14 01:30:12.904787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.904845] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.904855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:39.406 [2024-12-14 01:30:12.904863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:39.406 [2024-12-14 01:30:12.904875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.905203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.905215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.406 [2024-12-14 01:30:12.905224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:30:39.406 [2024-12-14 01:30:12.905235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.905316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.406 [2024-12-14 01:30:12.905326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.406 [2024-12-14 01:30:12.905334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:39.406 [2024-12-14 01:30:12.905342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.406 [2024-12-14 01:30:12.905369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.407 [2024-12-14 01:30:12.905377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:39.407 [2024-12-14 01:30:12.905385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:39.407 [2024-12-14 01:30:12.905397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.407 [2024-12-14 01:30:12.905422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:39.407 [2024-12-14 01:30:12.907508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:39.407 [2024-12-14 01:30:12.907551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.407 [2024-12-14 01:30:12.907562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:30:39.407 [2024-12-14 01:30:12.907569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.407 [2024-12-14 01:30:12.907607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.407 [2024-12-14 01:30:12.907615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:39.407 [2024-12-14 01:30:12.907654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:39.407 [2024-12-14 01:30:12.907662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.407 [2024-12-14 01:30:12.907702] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:39.407 [2024-12-14 01:30:12.907724] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:39.407 [2024-12-14 01:30:12.907765] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:39.407 [2024-12-14 01:30:12.907784] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:39.407 [2024-12-14 01:30:12.907888] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:39.407 [2024-12-14 01:30:12.907898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:39.407 [2024-12-14 01:30:12.907908] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:39.407 [2024-12-14 01:30:12.907922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:30:39.407 [2024-12-14 01:30:12.907933] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:39.407 [2024-12-14 01:30:12.907942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:39.407 [2024-12-14 01:30:12.907950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:39.407 [2024-12-14 01:30:12.907957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:39.407 [2024-12-14 01:30:12.907965] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:39.407 [2024-12-14 01:30:12.907973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.407 [2024-12-14 01:30:12.907980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:39.407 [2024-12-14 01:30:12.907988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:39.407 [2024-12-14 01:30:12.907996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.407 [2024-12-14 01:30:12.908100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.407 [2024-12-14 01:30:12.908113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:39.407 [2024-12-14 01:30:12.908121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:39.407 [2024-12-14 01:30:12.908132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.407 [2024-12-14 01:30:12.908238] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:39.407 [2024-12-14 01:30:12.908265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:39.407 [2024-12-14 01:30:12.908277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:39.407 
[2024-12-14 01:30:12.908296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:39.407 [2024-12-14 01:30:12.908306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:39.407 [2024-12-14 01:30:12.908331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:39.407 [2024-12-14 01:30:12.908347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:39.407 [2024-12-14 01:30:12.908355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:39.407 [2024-12-14 01:30:12.908363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:39.407 [2024-12-14 01:30:12.908371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:39.407 [2024-12-14 01:30:12.908379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:39.407 [2024-12-14 01:30:12.908387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:39.407 [2024-12-14 01:30:12.908404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:39.407 [2024-12-14 01:30:12.908430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 
MiB 00:30:39.407 [2024-12-14 01:30:12.908446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:39.407 [2024-12-14 01:30:12.908455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:39.407 [2024-12-14 01:30:12.908476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:39.407 [2024-12-14 01:30:12.908499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:39.407 [2024-12-14 01:30:12.908521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:39.407 [2024-12-14 01:30:12.908541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:39.407 [2024-12-14 01:30:12.908548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:39.407 [2024-12-14 01:30:12.908555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:39.407 [2024-12-14 01:30:12.908565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:39.407 [2024-12-14 01:30:12.908575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:39.407 [2024-12-14 01:30:12.908582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:39.407 [2024-12-14 01:30:12.908597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:39.407 [2024-12-14 01:30:12.908605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908612] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:39.407 [2024-12-14 01:30:12.908637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:39.407 [2024-12-14 01:30:12.908650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:39.407 [2024-12-14 01:30:12.908671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:39.407 [2024-12-14 01:30:12.908679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:39.407 [2024-12-14 01:30:12.908687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:39.407 [2024-12-14 01:30:12.908698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:39.407 [2024-12-14 01:30:12.908705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:39.407 [2024-12-14 01:30:12.908713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:39.407 [2024-12-14 01:30:12.908722] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:39.407 [2024-12-14 01:30:12.908731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:39.407 [2024-12-14 01:30:12.908740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:39.407 
[2024-12-14 01:30:12.908747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:39.407 [2024-12-14 01:30:12.908754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:39.407 [2024-12-14 01:30:12.908763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:39.407 [2024-12-14 01:30:12.908770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:39.407 [2024-12-14 01:30:12.908778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:39.407 [2024-12-14 01:30:12.908786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:39.407 [2024-12-14 01:30:12.908793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:39.407 [2024-12-14 01:30:12.908801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:39.407 [2024-12-14 01:30:12.908807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:39.407 [2024-12-14 01:30:12.908815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:39.407 [2024-12-14 01:30:12.908829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:39.407 [2024-12-14 01:30:12.908837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:39.407 [2024-12-14 01:30:12.908844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:39.407 [2024-12-14 01:30:12.908854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:39.408 [2024-12-14 01:30:12.908862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:39.408 [2024-12-14 01:30:12.908871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:39.408 [2024-12-14 01:30:12.908879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:39.408 [2024-12-14 01:30:12.908886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:39.408 [2024-12-14 01:30:12.908895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:39.408 [2024-12-14 01:30:12.908903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.908911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:39.408 [2024-12-14 01:30:12.908919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:30:39.408 [2024-12-14 01:30:12.908926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.918603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.918659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.408 [2024-12-14 
01:30:12.918669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.633 ms 00:30:39.408 [2024-12-14 01:30:12.918677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.918760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.918769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:39.408 [2024-12-14 01:30:12.918784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:39.408 [2024-12-14 01:30:12.918791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.938052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.938104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.408 [2024-12-14 01:30:12.938117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.202 ms 00:30:39.408 [2024-12-14 01:30:12.938126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.938173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.938185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:39.408 [2024-12-14 01:30:12.938195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:39.408 [2024-12-14 01:30:12.938203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.938319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.938336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.408 [2024-12-14 01:30:12.938346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:39.408 [2024-12-14 01:30:12.938354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:39.408 [2024-12-14 01:30:12.938481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.938491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.408 [2024-12-14 01:30:12.938503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:30:39.408 [2024-12-14 01:30:12.938514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.946378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.946423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.408 [2024-12-14 01:30:12.946438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.844 ms 00:30:39.408 [2024-12-14 01:30:12.946445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.946560] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:39.408 [2024-12-14 01:30:12.946573] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:39.408 [2024-12-14 01:30:12.946586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.946595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:39.408 [2024-12-14 01:30:12.946605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:39.408 [2024-12-14 01:30:12.946612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.958922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.958966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:39.408 [2024-12-14 01:30:12.958977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 12.275 ms 00:30:39.408 [2024-12-14 01:30:12.958988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.959116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.959125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:39.408 [2024-12-14 01:30:12.959139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:39.408 [2024-12-14 01:30:12.959146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.959198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.959207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:39.408 [2024-12-14 01:30:12.959216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:39.408 [2024-12-14 01:30:12.959223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.959523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.959533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:39.408 [2024-12-14 01:30:12.959548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:30:39.408 [2024-12-14 01:30:12.959559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.959578] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:39.408 [2024-12-14 01:30:12.959588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.959598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:39.408 [2024-12-14 01:30:12.959607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 
ms 00:30:39.408 [2024-12-14 01:30:12.959614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.970411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:39.408 [2024-12-14 01:30:12.970557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.970568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:39.408 [2024-12-14 01:30:12.970577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.896 ms 00:30:39.408 [2024-12-14 01:30:12.970589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.973011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.973038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:39.408 [2024-12-14 01:30:12.973048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:30:39.408 [2024-12-14 01:30:12.973056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.973147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.973157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:39.408 [2024-12-14 01:30:12.973167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:39.408 [2024-12-14 01:30:12.973178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.973199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.973209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:39.408 [2024-12-14 01:30:12.973217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:39.408 [2024-12-14 01:30:12.973224] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.973256] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:39.408 [2024-12-14 01:30:12.973268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.973276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:39.408 [2024-12-14 01:30:12.973284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:39.408 [2024-12-14 01:30:12.973291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.979498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.979541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:39.408 [2024-12-14 01:30:12.979560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:30:39.408 [2024-12-14 01:30:12.979568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.979673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.408 [2024-12-14 01:30:12.979684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:39.408 [2024-12-14 01:30:12.979694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:39.408 [2024-12-14 01:30:12.979706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.408 [2024-12-14 01:30:12.981422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.056 ms, result 0 00:30:40.796  [2024-12-14T01:30:15.420Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-14T01:30:16.362Z] Copying: 38/1024 [MB] (21 MBps) [2024-12-14T01:30:17.306Z] Copying: 57/1024 [MB] (18 MBps) [2024-12-14T01:30:18.249Z] Copying: 72/1024 [MB] (15 MBps) [2024-12-14T01:30:19.194Z] Copying: 
93/1024 [MB] (21 MBps) [2024-12-14T01:30:20.580Z] Copying: 107/1024 [MB] (14 MBps) [2024-12-14T01:30:21.523Z] Copying: 121/1024 [MB] (13 MBps) [2024-12-14T01:30:22.467Z] Copying: 139/1024 [MB] (17 MBps) [2024-12-14T01:30:23.413Z] Copying: 158/1024 [MB] (19 MBps) [2024-12-14T01:30:24.355Z] Copying: 171/1024 [MB] (12 MBps) [2024-12-14T01:30:25.298Z] Copying: 198/1024 [MB] (27 MBps) [2024-12-14T01:30:26.238Z] Copying: 211/1024 [MB] (13 MBps) [2024-12-14T01:30:27.180Z] Copying: 225/1024 [MB] (13 MBps) [2024-12-14T01:30:28.566Z] Copying: 245/1024 [MB] (20 MBps) [2024-12-14T01:30:29.509Z] Copying: 262/1024 [MB] (17 MBps) [2024-12-14T01:30:30.451Z] Copying: 282/1024 [MB] (19 MBps) [2024-12-14T01:30:31.391Z] Copying: 297/1024 [MB] (14 MBps) [2024-12-14T01:30:32.331Z] Copying: 308/1024 [MB] (10 MBps) [2024-12-14T01:30:33.274Z] Copying: 322/1024 [MB] (13 MBps) [2024-12-14T01:30:34.217Z] Copying: 333/1024 [MB] (11 MBps) [2024-12-14T01:30:35.601Z] Copying: 347/1024 [MB] (14 MBps) [2024-12-14T01:30:36.173Z] Copying: 361/1024 [MB] (13 MBps) [2024-12-14T01:30:37.559Z] Copying: 374/1024 [MB] (13 MBps) [2024-12-14T01:30:38.503Z] Copying: 385/1024 [MB] (10 MBps) [2024-12-14T01:30:39.446Z] Copying: 398/1024 [MB] (12 MBps) [2024-12-14T01:30:40.428Z] Copying: 415/1024 [MB] (17 MBps) [2024-12-14T01:30:41.372Z] Copying: 431/1024 [MB] (15 MBps) [2024-12-14T01:30:42.316Z] Copying: 452/1024 [MB] (20 MBps) [2024-12-14T01:30:43.260Z] Copying: 469/1024 [MB] (17 MBps) [2024-12-14T01:30:44.202Z] Copying: 489/1024 [MB] (19 MBps) [2024-12-14T01:30:45.588Z] Copying: 503/1024 [MB] (13 MBps) [2024-12-14T01:30:46.531Z] Copying: 513/1024 [MB] (10 MBps) [2024-12-14T01:30:47.475Z] Copying: 524/1024 [MB] (10 MBps) [2024-12-14T01:30:48.418Z] Copying: 534/1024 [MB] (10 MBps) [2024-12-14T01:30:49.362Z] Copying: 545/1024 [MB] (10 MBps) [2024-12-14T01:30:50.306Z] Copying: 555/1024 [MB] (10 MBps) [2024-12-14T01:30:51.249Z] Copying: 570/1024 [MB] (15 MBps) [2024-12-14T01:30:52.194Z] Copying: 590/1024 [MB] (20 
MBps) [2024-12-14T01:30:53.581Z] Copying: 602/1024 [MB] (12 MBps) [2024-12-14T01:30:54.527Z] Copying: 614/1024 [MB] (11 MBps) [2024-12-14T01:30:55.471Z] Copying: 627/1024 [MB] (13 MBps) [2024-12-14T01:30:56.413Z] Copying: 641/1024 [MB] (13 MBps) [2024-12-14T01:30:57.357Z] Copying: 656/1024 [MB] (15 MBps) [2024-12-14T01:30:58.300Z] Copying: 671/1024 [MB] (15 MBps) [2024-12-14T01:30:59.236Z] Copying: 687/1024 [MB] (16 MBps) [2024-12-14T01:31:00.170Z] Copying: 712/1024 [MB] (25 MBps) [2024-12-14T01:31:01.543Z] Copying: 731/1024 [MB] (18 MBps) [2024-12-14T01:31:02.474Z] Copying: 757/1024 [MB] (25 MBps) [2024-12-14T01:31:03.405Z] Copying: 776/1024 [MB] (19 MBps) [2024-12-14T01:31:04.337Z] Copying: 804/1024 [MB] (28 MBps) [2024-12-14T01:31:05.273Z] Copying: 826/1024 [MB] (21 MBps) [2024-12-14T01:31:06.239Z] Copying: 850/1024 [MB] (24 MBps) [2024-12-14T01:31:07.172Z] Copying: 870/1024 [MB] (20 MBps) [2024-12-14T01:31:08.547Z] Copying: 888/1024 [MB] (17 MBps) [2024-12-14T01:31:09.481Z] Copying: 911/1024 [MB] (23 MBps) [2024-12-14T01:31:10.416Z] Copying: 928/1024 [MB] (16 MBps) [2024-12-14T01:31:11.350Z] Copying: 944/1024 [MB] (16 MBps) [2024-12-14T01:31:12.283Z] Copying: 960/1024 [MB] (16 MBps) [2024-12-14T01:31:13.226Z] Copying: 977/1024 [MB] (16 MBps) [2024-12-14T01:31:14.170Z] Copying: 989/1024 [MB] (12 MBps) [2024-12-14T01:31:15.556Z] Copying: 1000/1024 [MB] (11 MBps) [2024-12-14T01:31:16.128Z] Copying: 1011/1024 [MB] (10 MBps) [2024-12-14T01:31:16.702Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-14 01:31:16.394699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.090 [2024-12-14 01:31:16.394786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:43.090 [2024-12-14 01:31:16.394803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:43.090 [2024-12-14 01:31:16.394813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.090 [2024-12-14 01:31:16.394851] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:43.090 [2024-12-14 01:31:16.395716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.090 [2024-12-14 01:31:16.395760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:43.090 [2024-12-14 01:31:16.395773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:31:43.090 [2024-12-14 01:31:16.395784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.090 [2024-12-14 01:31:16.396039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.090 [2024-12-14 01:31:16.396060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:43.090 [2024-12-14 01:31:16.396070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:31:43.090 [2024-12-14 01:31:16.396079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.090 [2024-12-14 01:31:16.396115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.090 [2024-12-14 01:31:16.396125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:43.090 [2024-12-14 01:31:16.396135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:43.090 [2024-12-14 01:31:16.396143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.090 [2024-12-14 01:31:16.396208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.090 [2024-12-14 01:31:16.396226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:43.090 [2024-12-14 01:31:16.396235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:43.090 [2024-12-14 01:31:16.396242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.090 [2024-12-14 01:31:16.396260] ftl_debug.c: 165:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Bands validity: 00:31:43.090 [2024-12-14 01:31:16.396276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 
/ 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 
0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
42: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:43.090 [2024-12-14 01:31:16.396689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396988] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.396995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397108] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:43.091 [2024-12-14 01:31:16.397141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:43.091 [2024-12-14 01:31:16.397149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:31:43.091 [2024-12-14 01:31:16.397159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:43.091 [2024-12-14 01:31:16.397168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:43.091 [2024-12-14 01:31:16.397176] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:43.091 [2024-12-14 01:31:16.397189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:43.091 [2024-12-14 01:31:16.397196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:43.091 [2024-12-14 01:31:16.397205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:43.091 [2024-12-14 01:31:16.397213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:43.091 [2024-12-14 01:31:16.397221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:43.091 [2024-12-14 01:31:16.397230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:43.091 [2024-12-14 01:31:16.397240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.091 [2024-12-14 01:31:16.397248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:43.091 [2024-12-14 01:31:16.397259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:31:43.091 
[2024-12-14 01:31:16.397267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.091 [2024-12-14 01:31:16.399765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.091 [2024-12-14 01:31:16.399816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:43.091 [2024-12-14 01:31:16.399830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.479 ms 00:31:43.091 [2024-12-14 01:31:16.399839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.091 [2024-12-14 01:31:16.399962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.091 [2024-12-14 01:31:16.399977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:43.091 [2024-12-14 01:31:16.399988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:43.091 [2024-12-14 01:31:16.400005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.091 [2024-12-14 01:31:16.408476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.091 [2024-12-14 01:31:16.408532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.091 [2024-12-14 01:31:16.408543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.091 [2024-12-14 01:31:16.408551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.091 [2024-12-14 01:31:16.408665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.091 [2024-12-14 01:31:16.408683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.091 [2024-12-14 01:31:16.408692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.091 [2024-12-14 01:31:16.408705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.091 [2024-12-14 01:31:16.408777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback
00:31:43.091 [2024-12-14 01:31:16.408789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:43.091 [2024-12-14 01:31:16.408802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.091 [2024-12-14 01:31:16.408810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.091 [2024-12-14 01:31:16.408829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.091 [2024-12-14 01:31:16.408839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:43.091 [2024-12-14 01:31:16.408850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.091 [2024-12-14 01:31:16.408860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.091 [2024-12-14 01:31:16.424695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.091 [2024-12-14 01:31:16.424752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:43.091 [2024-12-14 01:31:16.424764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.091 [2024-12-14 01:31:16.424773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.091 [2024-12-14 01:31:16.438566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.091 [2024-12-14 01:31:16.438652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:43.091 [2024-12-14 01:31:16.438670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.091 [2024-12-14 01:31:16.438678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.091 [2024-12-14 01:31:16.438742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.091 [2024-12-14 01:31:16.438751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:43.091 [2024-12-14 01:31:16.438760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.091 [2024-12-14 01:31:16.438770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.091 [2024-12-14 01:31:16.438807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.092 [2024-12-14 01:31:16.438818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:43.092 [2024-12-14 01:31:16.438827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.092 [2024-12-14 01:31:16.438839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.092 [2024-12-14 01:31:16.438902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.092 [2024-12-14 01:31:16.438913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:43.092 [2024-12-14 01:31:16.438922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.092 [2024-12-14 01:31:16.438930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.092 [2024-12-14 01:31:16.438958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.092 [2024-12-14 01:31:16.438970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:43.092 [2024-12-14 01:31:16.438979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.092 [2024-12-14 01:31:16.438988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.092 [2024-12-14 01:31:16.439034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.092 [2024-12-14 01:31:16.439046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:43.092 [2024-12-14 01:31:16.439055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.092 [2024-12-14 01:31:16.439063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.092 [2024-12-14 01:31:16.439108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.092 [2024-12-14 01:31:16.439120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:43.092 [2024-12-14 01:31:16.439128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.092 [2024-12-14 01:31:16.439140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.092 [2024-12-14 01:31:16.439277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 44.543 ms, result 0
00:31:43.092
00:31:43.092
00:31:43.092 01:31:16 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:45.638 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:31:45.638 01:31:18 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:31:45.638 [2024-12-14 01:31:18.948214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:31:45.638 [2024-12-14 01:31:18.948324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97381 ]
00:31:45.638 [2024-12-14 01:31:19.089256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:45.638 [2024-12-14 01:31:19.105653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:45.638 [2024-12-14 01:31:19.187202] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:45.638 [2024-12-14 01:31:19.187259] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:45.898 [2024-12-14 01:31:19.334121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:45.898 [2024-12-14 01:31:19.334166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:45.898 [2024-12-14 01:31:19.334175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.334211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:45.898 [2024-12-14 01:31:19.334229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:31:45.898 [2024-12-14 01:31:19.334238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.334255] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:45.898 [2024-12-14 01:31:19.334431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:45.898 [2024-12-14 01:31:19.334442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:45.898 [2024-12-14 01:31:19.334460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms
00:31:45.898 [2024-12-14 01:31:19.334465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.334673] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:31:45.898 [2024-12-14 01:31:19.334690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:31:45.898 [2024-12-14 01:31:19.334705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:31:45.898 [2024-12-14 01:31:19.334714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.334750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:31:45.898 [2024-12-14 01:31:19.334763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:31:45.898 [2024-12-14 01:31:19.334769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.334947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.334956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:45.898 [2024-12-14 01:31:19.334962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms
00:31:45.898 [2024-12-14 01:31:19.334967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.335026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.335034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:45.898 [2024-12-14 01:31:19.335040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:31:45.898 [2024-12-14 01:31:19.335045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.335061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.898 [2024-12-14 01:31:19.335067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:31:45.898 [2024-12-14 01:31:19.335073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:45.898 [2024-12-14 01:31:19.335079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.898 [2024-12-14 01:31:19.335091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:45.898 [2024-12-14 01:31:19.336318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.899 [2024-12-14 01:31:19.336340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:45.899 [2024-12-14 01:31:19.336347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms
00:31:45.899 [2024-12-14 01:31:19.336353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.899 [2024-12-14 01:31:19.336380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.899 [2024-12-14 01:31:19.336387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:31:45.899 [2024-12-14 01:31:19.336397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:31:45.899 [2024-12-14 01:31:19.336403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.899 [2024-12-14 01:31:19.336416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:31:45.899 [2024-12-14 01:31:19.336432] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:31:45.899 [2024-12-14 01:31:19.336461] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:31:45.899 [2024-12-14 01:31:19.336475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:31:45.899 [2024-12-14 01:31:19.336559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:31:45.899 [2024-12-14 01:31:19.336570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:31:45.899 [2024-12-14 01:31:19.336578] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:31:45.899 [2024-12-14 01:31:19.336586] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336595] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336604] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:31:45.899 [2024-12-14 01:31:19.336610] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:31:45.899 [2024-12-14 01:31:19.336616] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:31:45.899 [2024-12-14 01:31:19.336632] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:31:45.899 [2024-12-14 01:31:19.336638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.899 [2024-12-14 01:31:19.336643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:31:45.899 [2024-12-14 01:31:19.336650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms
00:31:45.899 [2024-12-14 01:31:19.336655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.899 [2024-12-14 01:31:19.336718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.899 [2024-12-14 01:31:19.336728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:31:45.899 [2024-12-14 01:31:19.336738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:31:45.899 [2024-12-14 01:31:19.336744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.899 [2024-12-14 01:31:19.336815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:31:45.899 [2024-12-14 01:31:19.336826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:31:45.899 [2024-12-14 01:31:19.336833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:31:45.899 [2024-12-14 01:31:19.336859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:31:45.899 [2024-12-14 01:31:19.336875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:45.899 [2024-12-14 01:31:19.336885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:31:45.899 [2024-12-14 01:31:19.336890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:31:45.899 [2024-12-14 01:31:19.336895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:45.899 [2024-12-14 01:31:19.336900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:31:45.899 [2024-12-14 01:31:19.336906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:31:45.899 [2024-12-14 01:31:19.336911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:31:45.899 [2024-12-14 01:31:19.336921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:31:45.899 [2024-12-14 01:31:19.336938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:31:45.899 [2024-12-14 01:31:19.336953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:31:45.899 [2024-12-14 01:31:19.336968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:31:45.899 [2024-12-14 01:31:19.336986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:31:45.899 [2024-12-14 01:31:19.336993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:45.899 [2024-12-14 01:31:19.336999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:31:45.899 [2024-12-14 01:31:19.337005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:31:45.899 [2024-12-14 01:31:19.337011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:45.899 [2024-12-14 01:31:19.337016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:31:45.899 [2024-12-14 01:31:19.337022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:31:45.899 [2024-12-14 01:31:19.337031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:45.899 [2024-12-14 01:31:19.337037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:31:45.899 [2024-12-14 01:31:19.337042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:31:45.899 [2024-12-14 01:31:19.337048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.337054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:31:45.899 [2024-12-14 01:31:19.337060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:31:45.899 [2024-12-14 01:31:19.337067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.337073] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:31:45.899 [2024-12-14 01:31:19.337079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:31:45.899 [2024-12-14 01:31:19.337085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:45.899 [2024-12-14 01:31:19.337094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:45.899 [2024-12-14 01:31:19.337100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:31:45.899 [2024-12-14 01:31:19.337106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:31:45.899 [2024-12-14 01:31:19.337112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:31:45.899 [2024-12-14 01:31:19.337117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:31:45.899 [2024-12-14 01:31:19.337124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:31:45.899 [2024-12-14 01:31:19.337132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:31:45.899 [2024-12-14 01:31:19.337142] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:45.899 [2024-12-14 01:31:19.337149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:31:45.899 [2024-12-14 01:31:19.337165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:31:45.899 [2024-12-14 01:31:19.337172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:31:45.899 [2024-12-14 01:31:19.337178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:31:45.899 [2024-12-14 01:31:19.337184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:31:45.899 [2024-12-14 01:31:19.337190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:31:45.899 [2024-12-14 01:31:19.337196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:31:45.899 [2024-12-14 01:31:19.337203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:31:45.899 [2024-12-14 01:31:19.337210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:31:45.899 [2024-12-14 01:31:19.337216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:45.899 [2024-12-14 01:31:19.337260] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:45.899 [2024-12-14 01:31:19.337267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:45.899 [2024-12-14 01:31:19.337286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:45.900 [2024-12-14 01:31:19.337293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:45.900 [2024-12-14 01:31:19.337299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:45.900 [2024-12-14 01:31:19.337306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.337312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:45.900 [2024-12-14 01:31:19.337318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms
00:31:45.900 [2024-12-14 01:31:19.337324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.342639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.342659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:45.900 [2024-12-14 01:31:19.342667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms
00:31:45.900 [2024-12-14 01:31:19.342674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.342738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.342745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:45.900 [2024-12-14 01:31:19.342751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:31:45.900 [2024-12-14 01:31:19.342757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.368573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.368683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:45.900 [2024-12-14 01:31:19.368714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.781 ms
00:31:45.900 [2024-12-14 01:31:19.368735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.368806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.368840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:45.900 [2024-12-14 01:31:19.368871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:31:45.900 [2024-12-14 01:31:19.368891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.369126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.369178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:45.900 [2024-12-14 01:31:19.369211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms
00:31:45.900 [2024-12-14 01:31:19.369235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.369542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.369586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:45.900 [2024-12-14 01:31:19.369609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms
00:31:45.900 [2024-12-14 01:31:19.369731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.374536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.374558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:45.900 [2024-12-14 01:31:19.374572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms
00:31:45.900 [2024-12-14 01:31:19.374581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.374674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:31:45.900 [2024-12-14 01:31:19.374684] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:45.900 [2024-12-14 01:31:19.374692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.374698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:45.900 [2024-12-14 01:31:19.374705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:31:45.900 [2024-12-14 01:31:19.374712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.383836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.383856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:45.900 [2024-12-14 01:31:19.383869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms
00:31:45.900 [2024-12-14 01:31:19.383878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.383962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.383969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:45.900 [2024-12-14 01:31:19.383975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:31:45.900 [2024-12-14 01:31:19.383983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.384016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.384026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:45.900 [2024-12-14 01:31:19.384032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms
00:31:45.900 [2024-12-14 01:31:19.384038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.384255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.384268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:45.900 [2024-12-14 01:31:19.384275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms
00:31:45.900 [2024-12-14 01:31:19.384280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.384293] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:31:45.900 [2024-12-14 01:31:19.384301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.384310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:45.900 [2024-12-14 01:31:19.384316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:31:45.900 [2024-12-14 01:31:19.384322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.390552] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:45.900 [2024-12-14 01:31:19.390653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.390661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:31:45.900 [2024-12-14 01:31:19.390671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.319 ms
00:31:45.900 [2024-12-14 01:31:19.390676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.392355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.392373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:31:45.900 [2024-12-14 01:31:19.392380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms
00:31:45.900 [2024-12-14 01:31:19.392386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.392437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.392447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:31:45.900 [2024-12-14 01:31:19.392454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:31:45.900 [2024-12-14 01:31:19.392459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.392477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.392484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:31:45.900 [2024-12-14 01:31:19.392490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:45.900 [2024-12-14 01:31:19.392496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.392519] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:45.900 [2024-12-14 01:31:19.392527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.392533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:45.900 [2024-12-14 01:31:19.392540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:31:45.900 [2024-12-14 01:31:19.392546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.395709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.395739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:45.900 [2024-12-14 01:31:19.395747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms
00:31:45.900 [2024-12-14 01:31:19.395753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.395803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:45.900 [2024-12-14 01:31:19.395810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:45.900 [2024-12-14 01:31:19.395816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:31:45.900 [2024-12-14 01:31:19.395821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:45.900 [2024-12-14 01:31:19.396875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.438 ms, result 0
00:31:46.833
[2024-12-14T01:31:21.816Z] Copying: 27/1024 [MB] (27 MBps)
[2024-12-14T01:31:22.750Z] Copying: 51/1024 [MB] (24 MBps)
[2024-12-14T01:31:23.683Z] Copying: 82/1024 [MB] (30 MBps)
[2024-12-14T01:31:24.618Z] Copying: 107/1024 [MB] (25 MBps)
[2024-12-14T01:31:25.562Z] Copying: 130/1024 [MB] (22 MBps)
[2024-12-14T01:31:26.507Z] Copying: 149/1024 [MB] (18 MBps)
[2024-12-14T01:31:27.451Z] Copying: 173/1024 [MB] (24 MBps)
[2024-12-14T01:31:28.838Z] Copying: 194/1024 [MB] (21 MBps)
[2024-12-14T01:31:29.410Z] Copying: 213/1024 [MB] (18 MBps)
[2024-12-14T01:31:30.799Z] Copying: 231/1024 [MB] (18 MBps)
[2024-12-14T01:31:31.743Z] Copying: 250/1024 [MB] (18 MBps)
[2024-12-14T01:31:32.761Z] Copying: 269/1024 [MB] (18 MBps)
[2024-12-14T01:31:33.704Z] Copying: 288/1024 [MB] (18 MBps)
[2024-12-14T01:31:34.648Z] Copying: 306/1024 [MB] (18 MBps)
[2024-12-14T01:31:35.588Z] Copying: 332/1024 [MB] (26 MBps)
[2024-12-14T01:31:36.531Z] Copying: 347/1024 [MB] (15 MBps)
[2024-12-14T01:31:37.473Z] Copying: 368/1024 [MB] (20 MBps)
[2024-12-14T01:31:38.416Z] Copying: 383/1024 [MB] (14 MBps)
[2024-12-14T01:31:39.803Z] Copying: 400/1024 [MB] (17 MBps)
[2024-12-14T01:31:40.749Z] Copying: 411/1024 [MB] (10 MBps)
[2024-12-14T01:31:41.693Z] Copying: 421/1024 [MB] (10 MBps)
[2024-12-14T01:31:42.638Z] Copying: 435/1024 [MB] (13 MBps)
[2024-12-14T01:31:43.584Z] Copying: 445/1024 [MB] (10 MBps)
[2024-12-14T01:31:44.528Z] Copying: 455/1024 [MB] (10 MBps)
[2024-12-14T01:31:45.476Z] Copying: 466/1024 [MB] (10 MBps)
[2024-12-14T01:31:46.420Z] Copying: 486/1024 [MB] (20 MBps)
[2024-12-14T01:31:47.808Z] Copying: 498/1024 [MB] (11 MBps)
[2024-12-14T01:31:48.751Z] Copying: 509/1024 [MB] (11 MBps)
[2024-12-14T01:31:49.692Z] Copying: 530/1024 [MB] (21 MBps)
[2024-12-14T01:31:50.634Z] Copying: 542/1024 [MB] (11 MBps)
[2024-12-14T01:31:51.576Z] Copying: 554/1024 [MB] (11 MBps)
[2024-12-14T01:31:52.519Z] Copying: 568/1024 [MB] (13 MBps)
[2024-12-14T01:31:53.464Z] Copying: 588/1024 [MB] (20 MBps)
[2024-12-14T01:31:54.851Z] Copying: 606/1024 [MB] (17 MBps)
[2024-12-14T01:31:55.422Z] Copying: 623/1024 [MB] (17 MBps)
[2024-12-14T01:31:56.808Z] Copying: 635/1024 [MB] (11 MBps)
[2024-12-14T01:31:57.750Z] Copying: 647/1024 [MB] (12 MBps)
[2024-12-14T01:31:58.757Z] Copying: 665/1024 [MB] (17 MBps)
[2024-12-14T01:31:59.700Z] Copying: 679/1024 [MB] (13 MBps)
[2024-12-14T01:32:00.642Z] Copying: 702/1024 [MB] (23 MBps)
[2024-12-14T01:32:01.586Z] Copying: 727/1024 [MB] (25 MBps)
[2024-12-14T01:32:02.528Z] Copying: 750/1024 [MB] (23 MBps)
[2024-12-14T01:32:03.471Z] Copying: 777/1024 [MB] (26 MBps)
[2024-12-14T01:32:04.414Z] Copying: 794/1024 [MB] (17 MBps)
[2024-12-14T01:32:05.801Z] Copying: 812/1024 [MB] (17 MBps)
[2024-12-14T01:32:06.743Z] Copying: 836/1024 [MB] (23 MBps)
[2024-12-14T01:32:07.687Z] Copying: 860/1024 [MB] (24 MBps)
[2024-12-14T01:32:08.630Z] Copying: 883/1024 [MB] (23 MBps)
[2024-12-14T01:32:09.574Z] Copying: 908/1024 [MB] (24 MBps)
[2024-12-14T01:32:10.518Z] Copying: 929/1024 [MB] (21 MBps)
[2024-12-14T01:32:11.458Z] Copying: 951/1024 [MB] (21 MBps)
[2024-12-14T01:32:12.844Z] Copying: 988/1024 [MB] (36 MBps)
[2024-12-14T01:32:13.414Z] Copying: 1010/1024 [MB] (22 MBps)
[2024-12-14T01:32:14.801Z] Copying: 1021/1024 [MB] (11 MBps)
[2024-12-14T01:32:14.801Z] Copying: 1048400/1048576 [kB] (2008 kBps)
[2024-12-14T01:32:14.801Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-12-14 01:32:14.628113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:41.189 [2024-12-14 01:32:14.628499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:41.189 [2024-12-14 01:32:14.628530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:32:41.189 [2024-12-14 01:32:14.628540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:41.189 [2024-12-14 01:32:14.630787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:41.189 [2024-12-14 01:32:14.634064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:41.189 [2024-12-14 01:32:14.634117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:41.189 [2024-12-14 01:32:14.634130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms
00:32:41.189 [2024-12-14 01:32:14.634139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:41.189 [2024-12-14 01:32:14.645771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:41.189 [2024-12-14 01:32:14.645823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:41.189 [2024-12-14 01:32:14.645835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.722 ms
00:32:41.189 [2024-12-14 01:32:14.645844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:41.189 [2024-12-14 01:32:14.645875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:41.189 [2024-12-14 01:32:14.645885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:32:41.189 [2024-12-14 01:32:14.645894] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:41.189 [2024-12-14 01:32:14.645902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.189 [2024-12-14 01:32:14.645964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.189 [2024-12-14 01:32:14.645978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:41.189 [2024-12-14 01:32:14.645987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:41.189 [2024-12-14 01:32:14.645999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.189 [2024-12-14 01:32:14.646015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:41.189 [2024-12-14 01:32:14.646028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125952 / 261120 wr_cnt: 1 state: open 00:32:41.189 [2024-12-14 01:32:14.646039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: 
free 00:32:41.189 [2024-12-14 01:32:14.646104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:32:41.189 [2024-12-14 01:32:14.646214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 
0 state: free 00:32:41.189 [2024-12-14 01:32:14.646321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:41.189 [2024-12-14 01:32:14.646393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 
261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 
/ 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:41.190 [2024-12-14 01:32:14.646860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:41.190 [2024-12-14 01:32:14.646872] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:32:41.190 [2024-12-14 01:32:14.646882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125952 00:32:41.190 [2024-12-14 01:32:14.646889] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125984 00:32:41.190 [2024-12-14 01:32:14.646897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125952 00:32:41.190 [2024-12-14 01:32:14.646906] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:41.190 [2024-12-14 01:32:14.646919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:41.190 [2024-12-14 01:32:14.646927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:41.190 
[2024-12-14 01:32:14.646936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:41.190 [2024-12-14 01:32:14.646943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:41.190 [2024-12-14 01:32:14.646949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:41.190 [2024-12-14 01:32:14.646957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-12-14 01:32:14.646966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:41.190 [2024-12-14 01:32:14.646973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:32:41.190 [2024-12-14 01:32:14.646982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.649280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-12-14 01:32:14.649319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:41.190 [2024-12-14 01:32:14.649337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:32:41.190 [2024-12-14 01:32:14.649350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.649467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-12-14 01:32:14.649476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:41.190 [2024-12-14 01:32:14.649486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:32:41.190 [2024-12-14 01:32:14.649493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.656940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.656995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:41.190 [2024-12-14 01:32:14.657005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.657013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.657074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.657083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:41.190 [2024-12-14 01:32:14.657092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.657099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.657143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.657153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:41.190 [2024-12-14 01:32:14.657165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.657173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.657190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.657199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:41.190 [2024-12-14 01:32:14.657207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.657214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.670307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.670366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:41.190 [2024-12-14 01:32:14.670377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.670385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.680963] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.681013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:41.190 [2024-12-14 01:32:14.681025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.190 [2024-12-14 01:32:14.681034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-12-14 01:32:14.681091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.190 [2024-12-14 01:32:14.681101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:41.191 [2024-12-14 01:32:14.681109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.191 [2024-12-14 01:32:14.681174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:41.191 [2024-12-14 01:32:14.681182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.191 [2024-12-14 01:32:14.681268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:41.191 [2024-12-14 01:32:14.681276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.191 [2024-12-14 01:32:14.681327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:32:41.191 [2024-12-14 01:32:14.681336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.191 [2024-12-14 01:32:14.681392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:41.191 [2024-12-14 01:32:14.681400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:41.191 [2024-12-14 01:32:14.681468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:41.191 [2024-12-14 01:32:14.681477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:41.191 [2024-12-14 01:32:14.681484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-12-14 01:32:14.681616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.752 ms, result 0 00:32:42.133 00:32:42.133 00:32:42.133 01:32:15 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:42.133 [2024-12-14 01:32:15.534891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:32:42.133 [2024-12-14 01:32:15.535028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97999 ] 00:32:42.133 [2024-12-14 01:32:15.681758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.133 [2024-12-14 01:32:15.711144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.395 [2024-12-14 01:32:15.826962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.395 [2024-12-14 01:32:15.827053] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.395 [2024-12-14 01:32:15.989177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.989241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:42.395 [2024-12-14 01:32:15.989256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:42.395 [2024-12-14 01:32:15.989264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.989322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.989336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:42.395 [2024-12-14 01:32:15.989345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:42.395 [2024-12-14 01:32:15.989359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.989388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:42.395 [2024-12-14 01:32:15.989804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:42.395 [2024-12-14 
01:32:15.989847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.989856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:42.395 [2024-12-14 01:32:15.989869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:32:42.395 [2024-12-14 01:32:15.989878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990153] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:42.395 [2024-12-14 01:32:15.990189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.990197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:42.395 [2024-12-14 01:32:15.990206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:42.395 [2024-12-14 01:32:15.990219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.990327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:42.395 [2024-12-14 01:32:15.990336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:42.395 [2024-12-14 01:32:15.990344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.990603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.395 [2024-12-14 01:32:15.990615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:32:42.395 [2024-12-14 01:32:15.990655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.990766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.395 [2024-12-14 01:32:15.990774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:42.395 [2024-12-14 01:32:15.990789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.990825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:42.395 [2024-12-14 01:32:15.990836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:42.395 [2024-12-14 01:32:15.990846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.990868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:42.395 [2024-12-14 01:32:15.992961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.993000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.395 [2024-12-14 01:32:15.993015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:32:42.395 [2024-12-14 01:32:15.993025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.993062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.993070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:42.395 [2024-12-14 01:32:15.993079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:42.395 [2024-12-14 01:32:15.993090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.993140] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:42.395 
[2024-12-14 01:32:15.993167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:42.395 [2024-12-14 01:32:15.993211] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:42.395 [2024-12-14 01:32:15.993227] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:42.395 [2024-12-14 01:32:15.993332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:42.395 [2024-12-14 01:32:15.993342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:42.395 [2024-12-14 01:32:15.993351] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:42.395 [2024-12-14 01:32:15.993362] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:42.395 [2024-12-14 01:32:15.993373] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:42.395 [2024-12-14 01:32:15.993384] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:42.395 [2024-12-14 01:32:15.993392] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:42.395 [2024-12-14 01:32:15.993405] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:42.395 [2024-12-14 01:32:15.993412] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:42.395 [2024-12-14 01:32:15.993420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.993427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:42.395 [2024-12-14 01:32:15.993438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.287 ms 00:32:42.395 [2024-12-14 01:32:15.993448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.993530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.395 [2024-12-14 01:32:15.993538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:42.395 [2024-12-14 01:32:15.993547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:42.395 [2024-12-14 01:32:15.993554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.395 [2024-12-14 01:32:15.993676] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:42.395 [2024-12-14 01:32:15.993710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:42.395 [2024-12-14 01:32:15.993720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.395 [2024-12-14 01:32:15.993729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.395 [2024-12-14 01:32:15.993739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:42.395 [2024-12-14 01:32:15.993747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:42.395 [2024-12-14 01:32:15.993755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:42.395 [2024-12-14 01:32:15.993763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:42.395 [2024-12-14 01:32:15.993771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:42.395 [2024-12-14 01:32:15.993781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.395 [2024-12-14 01:32:15.993792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:42.395 [2024-12-14 01:32:15.993800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:42.395 [2024-12-14 01:32:15.993808] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.395 [2024-12-14 01:32:15.993816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:42.396 [2024-12-14 01:32:15.993824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:42.396 [2024-12-14 01:32:15.993832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:42.396 [2024-12-14 01:32:15.993848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:42.396 [2024-12-14 01:32:15.993855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:42.396 [2024-12-14 01:32:15.993871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.396 [2024-12-14 01:32:15.993887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:42.396 [2024-12-14 01:32:15.993894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.396 [2024-12-14 01:32:15.993909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:42.396 [2024-12-14 01:32:15.993920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.396 [2024-12-14 01:32:15.993936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:42.396 [2024-12-14 01:32:15.993943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993951] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.396 [2024-12-14 01:32:15.993959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:42.396 [2024-12-14 01:32:15.993966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:42.396 [2024-12-14 01:32:15.993974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.396 [2024-12-14 01:32:15.993982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:42.396 [2024-12-14 01:32:15.993989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:42.396 [2024-12-14 01:32:15.993997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.396 [2024-12-14 01:32:15.994005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:42.396 [2024-12-14 01:32:15.994012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:42.396 [2024-12-14 01:32:15.994020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.396 [2024-12-14 01:32:15.994028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:42.396 [2024-12-14 01:32:15.994037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:42.396 [2024-12-14 01:32:15.994047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.396 [2024-12-14 01:32:15.994055] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:42.396 [2024-12-14 01:32:15.994068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:42.396 [2024-12-14 01:32:15.994080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.396 [2024-12-14 01:32:15.994091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.396 [2024-12-14 01:32:15.994101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:42.396 
[2024-12-14 01:32:15.994109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:42.396 [2024-12-14 01:32:15.994117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:42.396 [2024-12-14 01:32:15.994125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:42.396 [2024-12-14 01:32:15.994133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:42.396 [2024-12-14 01:32:15.994141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:42.396 [2024-12-14 01:32:15.994150] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:42.396 [2024-12-14 01:32:15.994160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:42.396 [2024-12-14 01:32:15.994175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:42.396 [2024-12-14 01:32:15.994182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:42.396 [2024-12-14 01:32:15.994194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:42.396 [2024-12-14 01:32:15.994201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:42.396 [2024-12-14 01:32:15.994208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:42.396 [2024-12-14 01:32:15.994217] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:42.396 [2024-12-14 01:32:15.994224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:42.396 [2024-12-14 01:32:15.994231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:42.396 [2024-12-14 01:32:15.994238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:42.396 [2024-12-14 01:32:15.994283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:42.396 [2024-12-14 01:32:15.994290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:42.396 [2024-12-14 01:32:15.994307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:32:42.396 [2024-12-14 01:32:15.994315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:42.396 [2024-12-14 01:32:15.994324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:42.396 [2024-12-14 01:32:15.994331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.396 [2024-12-14 01:32:15.994340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:42.396 [2024-12-14 01:32:15.994348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:32:42.396 [2024-12-14 01:32:15.994359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.004075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.004125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.658 [2024-12-14 01:32:16.004136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.672 ms 00:32:42.658 [2024-12-14 01:32:16.004144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.004225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.004233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:42.658 [2024-12-14 01:32:16.004244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:42.658 [2024-12-14 01:32:16.004251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.026986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.027057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.658 [2024-12-14 01:32:16.027073] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.680 ms 00:32:42.658 [2024-12-14 01:32:16.027084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.027138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.027151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.658 [2024-12-14 01:32:16.027168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:42.658 [2024-12-14 01:32:16.027184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.027310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.027333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.658 [2024-12-14 01:32:16.027347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:42.658 [2024-12-14 01:32:16.027357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.027510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.027532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.658 [2024-12-14 01:32:16.027544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:42.658 [2024-12-14 01:32:16.027554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.036180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.036231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.658 [2024-12-14 01:32:16.036251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.602 ms 00:32:42.658 [2024-12-14 01:32:16.036261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 
[2024-12-14 01:32:16.036405] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:42.658 [2024-12-14 01:32:16.036424] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:42.658 [2024-12-14 01:32:16.036437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.036447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:42.658 [2024-12-14 01:32:16.036457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:42.658 [2024-12-14 01:32:16.036469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.049147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.049199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:42.658 [2024-12-14 01:32:16.049210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:32:42.658 [2024-12-14 01:32:16.049222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.049354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.049365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:42.658 [2024-12-14 01:32:16.049373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:42.658 [2024-12-14 01:32:16.049385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.049436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.049449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:42.658 [2024-12-14 01:32:16.049457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.002 ms 00:32:42.658 [2024-12-14 01:32:16.049466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.049832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.049857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:42.658 [2024-12-14 01:32:16.049866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:32:42.658 [2024-12-14 01:32:16.049874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.049892] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:42.658 [2024-12-14 01:32:16.049901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.049913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:42.658 [2024-12-14 01:32:16.049921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:42.658 [2024-12-14 01:32:16.049929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.059108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:42.658 [2024-12-14 01:32:16.059261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.059272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:42.658 [2024-12-14 01:32:16.059282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.314 ms 00:32:42.658 [2024-12-14 01:32:16.059289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.061843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.061880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:42.658 
[2024-12-14 01:32:16.061890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:32:42.658 [2024-12-14 01:32:16.061898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.061973] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:42.658 [2024-12-14 01:32:16.062550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.062572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:42.658 [2024-12-14 01:32:16.062582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:32:42.658 [2024-12-14 01:32:16.062593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.062637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.658 [2024-12-14 01:32:16.062646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:42.658 [2024-12-14 01:32:16.062655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:42.658 [2024-12-14 01:32:16.062665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.658 [2024-12-14 01:32:16.062702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:42.659 [2024-12-14 01:32:16.062713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.659 [2024-12-14 01:32:16.062720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:42.659 [2024-12-14 01:32:16.062728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:42.659 [2024-12-14 01:32:16.062735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.659 [2024-12-14 01:32:16.068667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.659 [2024-12-14 
01:32:16.068716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:42.659 [2024-12-14 01:32:16.068727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.912 ms 00:32:42.659 [2024-12-14 01:32:16.068736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.659 [2024-12-14 01:32:16.068815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.659 [2024-12-14 01:32:16.068830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:42.659 [2024-12-14 01:32:16.068839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:42.659 [2024-12-14 01:32:16.068847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.659 [2024-12-14 01:32:16.070232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.384 ms, result 0 00:32:44.043  [2024-12-14T01:32:18.600Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-14T01:32:19.544Z] Copying: 32/1024 [MB] (20 MBps) [2024-12-14T01:32:20.487Z] Copying: 44/1024 [MB] (12 MBps) [2024-12-14T01:32:21.431Z] Copying: 59/1024 [MB] (14 MBps) [2024-12-14T01:32:22.374Z] Copying: 76/1024 [MB] (16 MBps) [2024-12-14T01:32:23.347Z] Copying: 88/1024 [MB] (12 MBps) [2024-12-14T01:32:24.302Z] Copying: 104/1024 [MB] (16 MBps) [2024-12-14T01:32:25.696Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-14T01:32:26.269Z] Copying: 136/1024 [MB] (15 MBps) [2024-12-14T01:32:27.657Z] Copying: 154/1024 [MB] (18 MBps) [2024-12-14T01:32:28.599Z] Copying: 169/1024 [MB] (14 MBps) [2024-12-14T01:32:29.543Z] Copying: 184/1024 [MB] (15 MBps) [2024-12-14T01:32:30.488Z] Copying: 202/1024 [MB] (17 MBps) [2024-12-14T01:32:31.431Z] Copying: 212/1024 [MB] (10 MBps) [2024-12-14T01:32:32.375Z] Copying: 223/1024 [MB] (10 MBps) [2024-12-14T01:32:33.319Z] Copying: 234/1024 [MB] (10 MBps) [2024-12-14T01:32:34.264Z] Copying: 244/1024 [MB] (10 MBps) 
[2024-12-14T01:32:35.651Z] Copying: 255/1024 [MB] (10 MBps) [2024-12-14T01:32:36.594Z] Copying: 266/1024 [MB] (10 MBps) [2024-12-14T01:32:37.537Z] Copying: 276/1024 [MB] (10 MBps) [2024-12-14T01:32:38.481Z] Copying: 294/1024 [MB] (17 MBps) [2024-12-14T01:32:39.425Z] Copying: 311/1024 [MB] (16 MBps) [2024-12-14T01:32:40.368Z] Copying: 326/1024 [MB] (15 MBps) [2024-12-14T01:32:41.312Z] Copying: 340/1024 [MB] (14 MBps) [2024-12-14T01:32:42.698Z] Copying: 358/1024 [MB] (17 MBps) [2024-12-14T01:32:43.270Z] Copying: 378/1024 [MB] (20 MBps) [2024-12-14T01:32:44.654Z] Copying: 397/1024 [MB] (19 MBps) [2024-12-14T01:32:45.599Z] Copying: 418/1024 [MB] (20 MBps) [2024-12-14T01:32:46.543Z] Copying: 443/1024 [MB] (24 MBps) [2024-12-14T01:32:47.486Z] Copying: 458/1024 [MB] (15 MBps) [2024-12-14T01:32:48.431Z] Copying: 475/1024 [MB] (16 MBps) [2024-12-14T01:32:49.375Z] Copying: 488/1024 [MB] (13 MBps) [2024-12-14T01:32:50.349Z] Copying: 499/1024 [MB] (10 MBps) [2024-12-14T01:32:51.293Z] Copying: 510/1024 [MB] (11 MBps) [2024-12-14T01:32:52.678Z] Copying: 525/1024 [MB] (14 MBps) [2024-12-14T01:32:53.619Z] Copying: 539/1024 [MB] (13 MBps) [2024-12-14T01:32:54.560Z] Copying: 550/1024 [MB] (11 MBps) [2024-12-14T01:32:55.502Z] Copying: 561/1024 [MB] (10 MBps) [2024-12-14T01:32:56.445Z] Copying: 586/1024 [MB] (24 MBps) [2024-12-14T01:32:57.389Z] Copying: 598/1024 [MB] (12 MBps) [2024-12-14T01:32:58.332Z] Copying: 609/1024 [MB] (10 MBps) [2024-12-14T01:32:59.274Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-14T01:33:00.659Z] Copying: 640/1024 [MB] (20 MBps) [2024-12-14T01:33:01.604Z] Copying: 651/1024 [MB] (10 MBps) [2024-12-14T01:33:02.545Z] Copying: 666/1024 [MB] (15 MBps) [2024-12-14T01:33:03.479Z] Copying: 679/1024 [MB] (13 MBps) [2024-12-14T01:33:04.411Z] Copying: 692/1024 [MB] (12 MBps) [2024-12-14T01:33:05.343Z] Copying: 704/1024 [MB] (12 MBps) [2024-12-14T01:33:06.284Z] Copying: 720/1024 [MB] (16 MBps) [2024-12-14T01:33:07.665Z] Copying: 735/1024 [MB] (14 MBps) 
[2024-12-14T01:33:08.607Z] Copying: 750/1024 [MB] (15 MBps) [2024-12-14T01:33:09.551Z] Copying: 767/1024 [MB] (17 MBps) [2024-12-14T01:33:10.494Z] Copying: 780/1024 [MB] (13 MBps) [2024-12-14T01:33:11.437Z] Copying: 793/1024 [MB] (12 MBps) [2024-12-14T01:33:12.380Z] Copying: 808/1024 [MB] (15 MBps) [2024-12-14T01:33:13.318Z] Copying: 827/1024 [MB] (18 MBps) [2024-12-14T01:33:14.703Z] Copying: 848/1024 [MB] (21 MBps) [2024-12-14T01:33:15.275Z] Copying: 868/1024 [MB] (19 MBps) [2024-12-14T01:33:16.678Z] Copying: 884/1024 [MB] (16 MBps) [2024-12-14T01:33:17.620Z] Copying: 905/1024 [MB] (20 MBps) [2024-12-14T01:33:18.564Z] Copying: 920/1024 [MB] (15 MBps) [2024-12-14T01:33:19.509Z] Copying: 931/1024 [MB] (11 MBps) [2024-12-14T01:33:20.450Z] Copying: 942/1024 [MB] (10 MBps) [2024-12-14T01:33:21.394Z] Copying: 954/1024 [MB] (12 MBps) [2024-12-14T01:33:22.337Z] Copying: 965/1024 [MB] (10 MBps) [2024-12-14T01:33:23.280Z] Copying: 975/1024 [MB] (10 MBps) [2024-12-14T01:33:24.666Z] Copying: 991/1024 [MB] (15 MBps) [2024-12-14T01:33:25.613Z] Copying: 1011/1024 [MB] (19 MBps) [2024-12-14T01:33:25.613Z] Copying: 1023/1024 [MB] (12 MBps) [2024-12-14T01:33:25.613Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-14 01:33:25.386734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.001 [2024-12-14 01:33:25.386818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:52.001 [2024-12-14 01:33:25.386836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:52.001 [2024-12-14 01:33:25.386848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.001 [2024-12-14 01:33:25.386876] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:52.001 [2024-12-14 01:33:25.387711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.001 [2024-12-14 01:33:25.387740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:33:52.001 [2024-12-14 01:33:25.387753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:33:52.001 [2024-12-14 01:33:25.387771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.001 [2024-12-14 01:33:25.388046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.001 [2024-12-14 01:33:25.388058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:52.001 [2024-12-14 01:33:25.388068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:33:52.001 [2024-12-14 01:33:25.388077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.001 [2024-12-14 01:33:25.388118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.001 [2024-12-14 01:33:25.388128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:52.001 [2024-12-14 01:33:25.388139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:52.001 [2024-12-14 01:33:25.388149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.001 [2024-12-14 01:33:25.388222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.001 [2024-12-14 01:33:25.388249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:52.001 [2024-12-14 01:33:25.388260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:52.001 [2024-12-14 01:33:25.388269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.001 [2024-12-14 01:33:25.388286] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:52.001 [2024-12-14 01:33:25.388301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:52.001 [2024-12-14 01:33:25.388317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 
/ 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:52.001 [2024-12-14 01:33:25.388573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 
/ 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
58: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.388999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389288] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:52.002 [2024-12-14 01:33:25.389306] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:52.002 [2024-12-14 01:33:25.389316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea 00:33:52.002 [2024-12-14 01:33:25.389330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:52.002 [2024-12-14 01:33:25.389339] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5152 00:33:52.002 [2024-12-14 01:33:25.389358] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5120 00:33:52.002 [2024-12-14 01:33:25.389372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0063 00:33:52.002 [2024-12-14 01:33:25.389383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:52.002 [2024-12-14 01:33:25.389394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:52.002 [2024-12-14 01:33:25.389405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:52.002 [2024-12-14 01:33:25.389414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:52.002 [2024-12-14 01:33:25.389423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:52.002 [2024-12-14 01:33:25.389431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.002 [2024-12-14 01:33:25.389441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:52.002 [2024-12-14 01:33:25.389450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:33:52.002 [2024-12-14 01:33:25.389459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.002 [2024-12-14 01:33:25.392371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.002 [2024-12-14 01:33:25.392569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:33:52.002 [2024-12-14 01:33:25.392592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:33:52.002 [2024-12-14 01:33:25.392600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.002 [2024-12-14 01:33:25.392738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:52.002 [2024-12-14 01:33:25.392757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:52.002 [2024-12-14 01:33:25.392767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:33:52.002 [2024-12-14 01:33:25.392774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.002 [2024-12-14 01:33:25.400914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.002 [2024-12-14 01:33:25.400960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:52.002 [2024-12-14 01:33:25.400972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.002 [2024-12-14 01:33:25.400979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.401047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.401056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:52.003 [2024-12-14 01:33:25.401065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.401074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.401145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.401158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:52.003 [2024-12-14 01:33:25.401168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.401175] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.401193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.401202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:52.003 [2024-12-14 01:33:25.401209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.401218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.415204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.415262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:52.003 [2024-12-14 01:33:25.415277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.415286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:52.003 [2024-12-14 01:33:25.427330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:52.003 [2024-12-14 01:33:25.427424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 
[2024-12-14 01:33:25.427479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:52.003 [2024-12-14 01:33:25.427488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:52.003 [2024-12-14 01:33:25.427578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:52.003 [2024-12-14 01:33:25.427652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:52.003 [2024-12-14 01:33:25.427725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:52.003 [2024-12-14 01:33:25.427796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:52.003 [2024-12-14 01:33:25.427805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:33:52.003 [2024-12-14 01:33:25.427812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:52.003 [2024-12-14 01:33:25.427952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.183 ms, result 0 00:33:52.264 00:33:52.264 00:33:52.264 01:33:25 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:54.814 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:54.814 01:33:27 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:54.814 01:33:27 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:54.814 01:33:27 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96072 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96072 ']' 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96072 00:33:54.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96072) - No such process 00:33:54.814 Process with pid 96072 is not found 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96072 is not found' 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:54.814 Remove shared memory files 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:54.814 01:33:28 
ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_band_md /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_l2p_l1 /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_l2p_l2 /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_l2p_l2_ctx /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_nvc_md /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_p2l_pool /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_sb /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_sb_shm /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_trim_bitmap /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_trim_log /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_trim_md /dev/hugepages/ftl_2cfa1acf-32a7-4ab3-bb7b-0c6c0a7a7dea_vmap 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:54.814 00:33:54.814 real 4m15.292s 00:33:54.814 user 4m2.973s 00:33:54.814 sys 0m12.017s 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.814 01:33:28 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:54.814 ************************************ 00:33:54.814 END TEST ftl_restore_fast 00:33:54.814 ************************************ 00:33:54.814 Process with pid 87692 is not found 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@14 -- # killprocess 87692 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@954 -- # '[' -z 87692 ']' 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@958 -- # kill -0 87692 00:33:54.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87692) - No such process 00:33:54.814 01:33:28 ftl -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 87692 is not found' 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98738 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98738 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@835 -- # '[' -z 98738 ']' 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.814 01:33:28 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.814 01:33:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:54.814 [2024-12-14 01:33:28.196676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:33:54.814 [2024-12-14 01:33:28.196832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98738 ] 00:33:54.814 [2024-12-14 01:33:28.338156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.814 [2024-12-14 01:33:28.367042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.759 01:33:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.759 01:33:29 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:55.759 01:33:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:56.020 nvme0n1 00:33:56.020 01:33:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:56.020 01:33:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:56.020 01:33:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:56.281 01:33:29 ftl -- ftl/common.sh@28 -- # stores=53c45f5e-4ddd-4721-98c3-cf176fd259aa 00:33:56.281 01:33:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:56.281 01:33:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53c45f5e-4ddd-4721-98c3-cf176fd259aa 00:33:56.281 01:33:29 ftl -- ftl/ftl.sh@23 -- # killprocess 98738 00:33:56.281 01:33:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 98738 ']' 00:33:56.281 01:33:29 ftl -- common/autotest_common.sh@958 -- # kill -0 98738 00:33:56.281 01:33:29 ftl -- common/autotest_common.sh@959 -- # uname 00:33:56.281 01:33:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.281 01:33:29 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98738 00:33:56.542 01:33:29 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.542 
killing process with pid 98738 00:33:56.542 01:33:29 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.542 01:33:29 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98738' 00:33:56.542 01:33:29 ftl -- common/autotest_common.sh@973 -- # kill 98738 00:33:56.542 01:33:29 ftl -- common/autotest_common.sh@978 -- # wait 98738 00:33:56.803 01:33:30 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:57.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:57.065 Waiting for block devices as requested 00:33:57.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:57.065 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:57.326 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:57.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:02.616 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:02.616 01:33:35 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:02.616 Remove shared memory files 00:34:02.616 01:33:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:02.616 01:33:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:02.616 01:33:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:02.616 01:33:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:02.616 01:33:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:02.616 01:33:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:02.616 00:34:02.616 real 16m38.842s 00:34:02.616 user 18m33.198s 00:34:02.616 sys 1m24.567s 00:34:02.616 ************************************ 00:34:02.616 01:33:35 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.616 01:33:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:02.616 END TEST ftl 00:34:02.616 ************************************ 00:34:02.616 01:33:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:02.616 01:33:36 -- 
spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:02.616 01:33:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:02.616 01:33:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:02.616 01:33:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:02.616 01:33:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:02.617 01:33:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:02.617 01:33:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:02.617 01:33:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:02.617 01:33:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:02.617 01:33:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.617 01:33:36 -- common/autotest_common.sh@10 -- # set +x 00:34:02.617 01:33:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:02.617 01:33:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:02.617 01:33:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:02.617 01:33:36 -- common/autotest_common.sh@10 -- # set +x 00:34:04.004 INFO: APP EXITING 00:34:04.004 INFO: killing all VMs 00:34:04.004 INFO: killing vhost app 00:34:04.004 INFO: EXIT DONE 00:34:04.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:04.837 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:04.837 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:04.837 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:04.837 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:05.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:05.671 Cleaning 00:34:05.671 Removing: /var/run/dpdk/spdk0/config 00:34:05.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:05.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:05.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:05.671 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:05.671 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:05.671 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:05.671 Removing: /var/run/dpdk/spdk0 00:34:05.671 Removing: /var/run/dpdk/spdk_pid70672 00:34:05.671 Removing: /var/run/dpdk/spdk_pid70830 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71026 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71113 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71136 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71248 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71260 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71443 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71516 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71596 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71690 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71771 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71810 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71841 00:34:05.671 Removing: /var/run/dpdk/spdk_pid71912 00:34:05.671 Removing: /var/run/dpdk/spdk_pid72007 00:34:05.671 Removing: /var/run/dpdk/spdk_pid72432 00:34:05.671 Removing: /var/run/dpdk/spdk_pid72474 00:34:05.671 Removing: /var/run/dpdk/spdk_pid72526 00:34:05.671 Removing: /var/run/dpdk/spdk_pid72536 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72594 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72605 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72663 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72679 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72725 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72739 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72781 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72799 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72926 00:34:05.672 Removing: /var/run/dpdk/spdk_pid72968 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73046 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73207 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73280 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73306 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73718 
00:34:05.672 Removing: /var/run/dpdk/spdk_pid73810 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73908 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73939 00:34:05.672 Removing: /var/run/dpdk/spdk_pid73970 00:34:05.672 Removing: /var/run/dpdk/spdk_pid74043 00:34:05.672 Removing: /var/run/dpdk/spdk_pid74679 00:34:05.672 Removing: /var/run/dpdk/spdk_pid74704 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75156 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75244 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75343 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75384 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75410 00:34:05.672 Removing: /var/run/dpdk/spdk_pid75430 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77262 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77383 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77387 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77399 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77449 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77453 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77465 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77504 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77508 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77520 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77559 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77563 00:34:05.672 Removing: /var/run/dpdk/spdk_pid77575 00:34:05.672 Removing: /var/run/dpdk/spdk_pid78969 00:34:05.672 Removing: /var/run/dpdk/spdk_pid79055 00:34:05.672 Removing: /var/run/dpdk/spdk_pid80448 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82180 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82238 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82303 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82398 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82484 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82569 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82632 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82696 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82789 00:34:05.672 Removing: /var/run/dpdk/spdk_pid82870 
00:34:05.672 Removing: /var/run/dpdk/spdk_pid82960 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83018 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83083 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83182 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83262 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83347 00:34:05.672 Removing: /var/run/dpdk/spdk_pid83410 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83474 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83567 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83648 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83738 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83796 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83859 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83923 00:34:05.934 Removing: /var/run/dpdk/spdk_pid83992 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84088 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84169 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84253 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84310 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84379 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84442 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84510 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84603 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84683 00:34:05.934 Removing: /var/run/dpdk/spdk_pid84821 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85094 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85115 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85562 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85742 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85833 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85932 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85975 00:34:05.934 Removing: /var/run/dpdk/spdk_pid85995 00:34:05.934 Removing: /var/run/dpdk/spdk_pid86291 00:34:05.934 Removing: /var/run/dpdk/spdk_pid86329 00:34:05.934 Removing: /var/run/dpdk/spdk_pid86380 00:34:05.934 Removing: /var/run/dpdk/spdk_pid86752 00:34:05.934 Removing: /var/run/dpdk/spdk_pid86897 00:34:05.934 Removing: /var/run/dpdk/spdk_pid87692 
00:34:05.934 Removing: /var/run/dpdk/spdk_pid87808 00:34:05.934 Removing: /var/run/dpdk/spdk_pid87962 00:34:05.934 Removing: /var/run/dpdk/spdk_pid88043 00:34:05.934 Removing: /var/run/dpdk/spdk_pid88334 00:34:05.934 Removing: /var/run/dpdk/spdk_pid88582 00:34:05.934 Removing: /var/run/dpdk/spdk_pid88928 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89083 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89237 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89273 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89427 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89448 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89484 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89742 00:34:05.934 Removing: /var/run/dpdk/spdk_pid89956 00:34:05.934 Removing: /var/run/dpdk/spdk_pid90478 00:34:05.934 Removing: /var/run/dpdk/spdk_pid91164 00:34:05.934 Removing: /var/run/dpdk/spdk_pid91830 00:34:05.934 Removing: /var/run/dpdk/spdk_pid92583 00:34:05.934 Removing: /var/run/dpdk/spdk_pid92720 00:34:05.934 Removing: /var/run/dpdk/spdk_pid92796 00:34:05.934 Removing: /var/run/dpdk/spdk_pid93373 00:34:05.934 Removing: /var/run/dpdk/spdk_pid93416 00:34:05.934 Removing: /var/run/dpdk/spdk_pid93890 00:34:05.934 Removing: /var/run/dpdk/spdk_pid94342 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95139 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95261 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95293 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95361 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95407 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95460 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95670 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95742 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95798 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95848 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95883 00:34:05.934 Removing: /var/run/dpdk/spdk_pid95944 00:34:05.934 Removing: /var/run/dpdk/spdk_pid96072 00:34:05.934 Removing: /var/run/dpdk/spdk_pid96290 00:34:05.934 Removing: /var/run/dpdk/spdk_pid96702 
00:34:05.934 Removing: /var/run/dpdk/spdk_pid97381 00:34:05.934 Removing: /var/run/dpdk/spdk_pid97999 00:34:05.934 Removing: /var/run/dpdk/spdk_pid98738 00:34:05.934 Clean 00:34:06.196 01:33:39 -- common/autotest_common.sh@1453 -- # return 0 00:34:06.196 01:33:39 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:06.196 01:33:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.196 01:33:39 -- common/autotest_common.sh@10 -- # set +x 00:34:06.196 01:33:39 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:06.196 01:33:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.196 01:33:39 -- common/autotest_common.sh@10 -- # set +x 00:34:06.196 01:33:39 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:06.196 01:33:39 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:06.196 01:33:39 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:06.196 01:33:39 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:06.196 01:33:39 -- spdk/autotest.sh@398 -- # hostname 00:34:06.196 01:33:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:06.196 geninfo: WARNING: invalid characters removed from testname! 
00:34:32.827 01:34:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:35.372 01:34:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:37.922 01:34:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:39.837 01:34:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:42.384 01:34:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:43.770 01:34:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:46.316 01:34:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:46.316 01:34:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:46.316 01:34:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:46.316 01:34:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:46.316 01:34:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:46.316 01:34:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:46.316 + [[ -n 5743 ]] 00:34:46.316 + sudo kill 5743 00:34:46.326 [Pipeline] } 00:34:46.338 [Pipeline] // timeout 00:34:46.344 [Pipeline] } 00:34:46.356 [Pipeline] // stage 00:34:46.361 [Pipeline] } 00:34:46.375 [Pipeline] // catchError 00:34:46.383 [Pipeline] stage 00:34:46.385 [Pipeline] { (Stop VM) 00:34:46.397 [Pipeline] sh 00:34:46.681 + vagrant halt 00:34:49.222 ==> default: Halting domain... 00:34:54.525 [Pipeline] sh 00:34:54.809 + vagrant destroy -f 00:34:57.354 ==> default: Removing domain... 
00:34:58.310 [Pipeline] sh 00:34:58.597 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:58.608 [Pipeline] } 00:34:58.622 [Pipeline] // stage 00:34:58.627 [Pipeline] } 00:34:58.641 [Pipeline] // dir 00:34:58.646 [Pipeline] } 00:34:58.659 [Pipeline] // wrap 00:34:58.760 [Pipeline] } 00:34:58.772 [Pipeline] // catchError 00:34:58.780 [Pipeline] stage 00:34:58.782 [Pipeline] { (Epilogue) 00:34:58.793 [Pipeline] sh 00:34:59.151 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:04.442 [Pipeline] catchError 00:35:04.444 [Pipeline] { 00:35:04.456 [Pipeline] sh 00:35:04.740 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:04.740 Artifacts sizes are good 00:35:04.750 [Pipeline] } 00:35:04.763 [Pipeline] // catchError 00:35:04.773 [Pipeline] archiveArtifacts 00:35:04.780 Archiving artifacts 00:35:04.869 [Pipeline] cleanWs 00:35:04.880 [WS-CLEANUP] Deleting project workspace... 00:35:04.880 [WS-CLEANUP] Deferred wipeout is used... 00:35:04.887 [WS-CLEANUP] done 00:35:04.889 [Pipeline] } 00:35:04.902 [Pipeline] // stage 00:35:04.906 [Pipeline] } 00:35:04.918 [Pipeline] // node 00:35:04.922 [Pipeline] End of Pipeline 00:35:04.986 Finished: SUCCESS